00:00:00.003 Started by upstream project "autotest-per-patch" build number 126194 00:00:00.003 originally caused by: 00:00:00.003 Started by user sys_sgci 00:00:00.150 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.151 The recommended git tool is: git 00:00:00.151 using credential 00000000-0000-0000-0000-000000000002 00:00:00.153 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.196 Fetching changes from the remote Git repository 00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.238 Using shallow fetch with depth 1 00:00:00.238 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.238 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.294 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.294 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.496 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.508 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.519 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.519 > git config core.sparsecheckout # timeout=10 00:00:06.531 > git read-tree -mu HEAD # timeout=10 00:00:06.548 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.570 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.570 > git rev-list --no-walk 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=10 00:00:06.729 [Pipeline] Start of Pipeline 00:00:06.746 [Pipeline] library 00:00:06.748 Loading library shm_lib@master 00:00:10.578 Library shm_lib@master is cached. Copying from home. 00:00:10.619 [Pipeline] node 00:00:10.784 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_4 00:00:10.788 [Pipeline] { 00:00:10.800 [Pipeline] catchError 00:00:10.801 [Pipeline] { 00:00:10.862 [Pipeline] wrap 00:00:10.873 [Pipeline] { 00:00:10.890 [Pipeline] stage 00:00:10.894 [Pipeline] { (Prologue) 00:00:10.941 [Pipeline] echo 00:00:10.944 Node: VM-host-SM9 00:00:10.955 [Pipeline] cleanWs 00:00:10.965 [WS-CLEANUP] Deleting project workspace... 00:00:10.965 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.971 [WS-CLEANUP] done 00:00:11.325 [Pipeline] setCustomBuildProperty 00:00:11.421 [Pipeline] httpRequest 00:00:11.437 [Pipeline] echo 00:00:11.438 Sorcerer 10.211.164.101 is alive 00:00:11.445 [Pipeline] httpRequest 00:00:11.448 HttpMethod: GET 00:00:11.449 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.449 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:11.469 Response Code: HTTP/1.1 200 OK 00:00:11.469 Success: Status code 200 is in the accepted range: 200,404 00:00:11.470 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:14.468 [Pipeline] sh 00:00:14.751 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:14.773 [Pipeline] httpRequest 00:00:14.798 [Pipeline] echo 00:00:14.799 Sorcerer 10.211.164.101 is alive 00:00:14.808 [Pipeline] httpRequest 00:00:14.813 HttpMethod: GET 00:00:14.813 URL: http://10.211.164.101/packages/spdk_a62e924c8afe418362213f845772380d81d50319.tar.gz 00:00:14.814 Sending request to url: http://10.211.164.101/packages/spdk_a62e924c8afe418362213f845772380d81d50319.tar.gz 00:00:14.822 Response Code: HTTP/1.1 200 OK 00:00:14.822 Success: Status code 200 is in the accepted range: 200,404 00:00:14.823 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk_a62e924c8afe418362213f845772380d81d50319.tar.gz 00:00:42.301 [Pipeline] sh 00:00:42.577 + tar --no-same-owner -xf spdk_a62e924c8afe418362213f845772380d81d50319.tar.gz 00:00:45.933 [Pipeline] sh 00:00:46.213 + git -C spdk log --oneline -n5 00:00:46.213 a62e924c8 nvmf/tcp: Add support for the interrupt mode in NVMe-of TCP 00:00:46.213 2f3522da7 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:00:46.213 ef59a6f4b nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:00:46.213 a26f69189 nvmf: enable iobuf based queuing for nvmf requests 00:00:46.213 24034319f nvmf/tcp: use sock group polling for the listening sockets 00:00:46.237 [Pipeline] writeFile 00:00:46.256 [Pipeline] sh 00:00:46.534 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:46.547 [Pipeline] sh 00:00:46.826 + cat autorun-spdk.conf 00:00:46.826 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.826 SPDK_TEST_NVMF=1 00:00:46.826 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.826 SPDK_TEST_USDT=1 00:00:46.826 SPDK_TEST_NVMF_MDNS=1 00:00:46.826 SPDK_RUN_UBSAN=1 00:00:46.826 NET_TYPE=virt 00:00:46.826 SPDK_JSONRPC_GO_CLIENT=1 00:00:46.826 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:46.833 RUN_NIGHTLY=0 00:00:46.836 [Pipeline] } 00:00:46.852 [Pipeline] // stage 00:00:46.870 [Pipeline] stage 00:00:46.872 [Pipeline] { (Run VM) 00:00:46.885 [Pipeline] sh 00:00:47.164 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:47.164 + echo 'Start stage prepare_nvme.sh' 00:00:47.164 Start stage prepare_nvme.sh 00:00:47.164 + [[ -n 2 ]] 00:00:47.164 + disk_prefix=ex2 00:00:47.164 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_4 ]] 00:00:47.164 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf ]] 00:00:47.164 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf 00:00:47.164 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.164 ++ SPDK_TEST_NVMF=1 00:00:47.164 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.164 ++ SPDK_TEST_USDT=1 00:00:47.164 ++ SPDK_TEST_NVMF_MDNS=1 00:00:47.164 ++ SPDK_RUN_UBSAN=1 00:00:47.164 ++ NET_TYPE=virt 00:00:47.164 ++ 
SPDK_JSONRPC_GO_CLIENT=1 00:00:47.164 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:47.164 ++ RUN_NIGHTLY=0 00:00:47.165 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_4 00:00:47.165 + nvme_files=() 00:00:47.165 + declare -A nvme_files 00:00:47.165 + backend_dir=/var/lib/libvirt/images/backends 00:00:47.165 + nvme_files['nvme.img']=5G 00:00:47.165 + nvme_files['nvme-cmb.img']=5G 00:00:47.165 + nvme_files['nvme-multi0.img']=4G 00:00:47.165 + nvme_files['nvme-multi1.img']=4G 00:00:47.165 + nvme_files['nvme-multi2.img']=4G 00:00:47.165 + nvme_files['nvme-openstack.img']=8G 00:00:47.165 + nvme_files['nvme-zns.img']=5G 00:00:47.165 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:47.165 + (( SPDK_TEST_FTL == 1 )) 00:00:47.165 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:47.165 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:47.165 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.165 + for nvme in "${!nvme_files[@]}" 00:00:47.165 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:47.730 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.730 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:47.730 + echo 'End stage prepare_nvme.sh' 00:00:47.730 End stage prepare_nvme.sh 00:00:47.743 [Pipeline] sh 00:00:48.020 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:48.020 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v 
-f fedora38 00:00:48.020 00:00:48.020 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/scripts/vagrant 00:00:48.020 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk 00:00:48.020 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4 00:00:48.020 HELP=0 00:00:48.020 DRY_RUN=0 00:00:48.020 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:48.020 NVME_DISKS_TYPE=nvme,nvme, 00:00:48.020 NVME_AUTO_CREATE=0 00:00:48.020 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:48.020 NVME_CMB=,, 00:00:48.020 NVME_PMR=,, 00:00:48.020 NVME_ZNS=,, 00:00:48.020 NVME_MS=,, 00:00:48.020 NVME_FDP=,, 00:00:48.020 SPDK_VAGRANT_DISTRO=fedora38 00:00:48.020 SPDK_VAGRANT_VMCPU=10 00:00:48.020 SPDK_VAGRANT_VMRAM=12288 00:00:48.020 SPDK_VAGRANT_PROVIDER=libvirt 00:00:48.020 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:48.020 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:48.021 SPDK_OPENSTACK_NETWORK=0 00:00:48.021 VAGRANT_PACKAGE_BOX=0 00:00:48.021 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile 00:00:48.021 FORCE_DISTRO=true 00:00:48.021 VAGRANT_BOX_VERSION= 00:00:48.021 EXTRA_VAGRANTFILES= 00:00:48.021 NIC_MODEL=e1000 00:00:48.021 00:00:48.021 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt' 00:00:48.021 /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_4 00:00:52.238 Bringing machine 'default' up with 'libvirt' provider... 00:00:53.173 ==> default: Creating image (snapshot of base box volume). 00:00:53.431 ==> default: Creating domain with the following settings... 
00:00:53.431 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721047505_4e47c307f5762a1bd8cc 00:00:53.431 ==> default: -- Domain type: kvm 00:00:53.431 ==> default: -- Cpus: 10 00:00:53.431 ==> default: -- Feature: acpi 00:00:53.431 ==> default: -- Feature: apic 00:00:53.431 ==> default: -- Feature: pae 00:00:53.431 ==> default: -- Memory: 12288M 00:00:53.431 ==> default: -- Memory Backing: hugepages: 00:00:53.431 ==> default: -- Management MAC: 00:00:53.431 ==> default: -- Loader: 00:00:53.431 ==> default: -- Nvram: 00:00:53.431 ==> default: -- Base box: spdk/fedora38 00:00:53.431 ==> default: -- Storage pool: default 00:00:53.431 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721047505_4e47c307f5762a1bd8cc.img (20G) 00:00:53.431 ==> default: -- Volume Cache: default 00:00:53.431 ==> default: -- Kernel: 00:00:53.431 ==> default: -- Initrd: 00:00:53.431 ==> default: -- Graphics Type: vnc 00:00:53.431 ==> default: -- Graphics Port: -1 00:00:53.431 ==> default: -- Graphics IP: 127.0.0.1 00:00:53.431 ==> default: -- Graphics Password: Not defined 00:00:53.431 ==> default: -- Video Type: cirrus 00:00:53.431 ==> default: -- Video VRAM: 9216 00:00:53.431 ==> default: -- Sound Type: 00:00:53.431 ==> default: -- Keymap: en-us 00:00:53.431 ==> default: -- TPM Path: 00:00:53.431 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:53.431 ==> default: -- Command line args: 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:53.431 ==> default: -> value=-drive, 00:00:53.431 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:53.431 ==> default: -> value=-drive, 00:00:53.431 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.431 ==> default: -> value=-drive, 00:00:53.431 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.431 ==> default: -> value=-drive, 00:00:53.431 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:53.431 ==> default: -> value=-device, 00:00:53.431 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:53.431 ==> default: Creating shared folders metadata... 00:00:53.689 ==> default: Starting domain. 00:00:55.612 ==> default: Waiting for domain to get an IP address... 00:01:13.687 ==> default: Waiting for SSH to become available... 00:01:13.687 ==> default: Configuring and enabling network interfaces... 
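[Illustrative aside, not part of the console output] The -device/-drive pairs logged above assemble into roughly the following QEMU invocation for the first emulated controller. This is a minimal sketch: the qemu-system-x86_64 binary path and all machine, CPU, and memory options are omitted and assumed; only the NVMe arguments are copied from the log.

    # Sketch only: one emulated NVMe controller (serial 12340) backed by the raw
    # ex2-nvme.img image, exposed as a single namespace with 4096-byte blocks.
    qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The second controller (nvme-1, serial 12341) follows the same pattern with three nvme-ns devices, one per multi*.img backing file, as listed in the logged command-line args.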
00:01:16.212 default: SSH address: 192.168.121.40:22 00:01:16.212 default: SSH username: vagrant 00:01:16.212 default: SSH auth method: private key 00:01:18.110 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:26.218 ==> default: Mounting SSHFS shared folder... 00:01:28.201 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.201 ==> default: Checking Mount.. 00:01:29.133 ==> default: Folder Successfully Mounted! 00:01:29.133 ==> default: Running provisioner: file... 00:01:29.697 default: ~/.gitconfig => .gitconfig 00:01:29.955 00:01:29.955 SUCCESS! 00:01:29.955 00:01:29.955 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt and type "vagrant ssh" to use. 00:01:29.955 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:29.955 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt" to destroy all trace of vm. 00:01:29.955 00:01:29.963 [Pipeline] } 00:01:29.980 [Pipeline] // stage 00:01:29.988 [Pipeline] dir 00:01:29.989 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt 00:01:29.990 [Pipeline] { 00:01:30.003 [Pipeline] catchError 00:01:30.004 [Pipeline] { 00:01:30.016 [Pipeline] sh 00:01:30.293 + sed -ne /^Host/,$p 00:01:30.293 + vagrant ssh-config --host vagrant 00:01:30.293 + tee ssh_conf 00:01:35.565 Host vagrant 00:01:35.565 HostName 192.168.121.40 00:01:35.565 User vagrant 00:01:35.565 Port 22 00:01:35.565 UserKnownHostsFile /dev/null 00:01:35.565 StrictHostKeyChecking no 00:01:35.565 PasswordAuthentication no 00:01:35.565 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:35.565 IdentitiesOnly yes 00:01:35.565 LogLevel FATAL 00:01:35.565 ForwardAgent yes 00:01:35.565 ForwardX11 yes 00:01:35.565 00:01:35.607 [Pipeline] withEnv 00:01:35.610 [Pipeline] { 00:01:35.626 [Pipeline] sh 00:01:35.904 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:35.904 source /etc/os-release 00:01:35.904 [[ -e /image.version ]] && img=$(< /image.version) 00:01:35.904 # Minimal, systemd-like check. 00:01:35.904 if [[ -e /.dockerenv ]]; then 00:01:35.904 # Clear garbage from the node's name: 00:01:35.904 # agt-er_autotest_547-896 -> autotest_547-896 00:01:35.904 # $HOSTNAME is the actual container id 00:01:35.904 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:35.904 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:35.904 # We can assume this is a mount from a host where container is running, 00:01:35.904 # so fetch its hostname to easily identify the target swarm worker. 
00:01:35.904 container="$(< /etc/hostname) ($agent)" 00:01:35.904 else 00:01:35.904 # Fallback 00:01:35.904 container=$agent 00:01:35.904 fi 00:01:35.904 fi 00:01:35.904 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:35.904 00:01:35.914 [Pipeline] } 00:01:35.937 [Pipeline] // withEnv 00:01:35.945 [Pipeline] setCustomBuildProperty 00:01:35.962 [Pipeline] stage 00:01:35.965 [Pipeline] { (Tests) 00:01:35.985 [Pipeline] sh 00:01:36.266 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:36.280 [Pipeline] sh 00:01:36.555 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:36.827 [Pipeline] timeout 00:01:36.827 Timeout set to expire in 40 min 00:01:36.829 [Pipeline] { 00:01:36.847 [Pipeline] sh 00:01:37.124 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:37.690 HEAD is now at a62e924c8 nvmf/tcp: Add support for the interrupt mode in NVMe-of TCP 00:01:37.703 [Pipeline] sh 00:01:37.979 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:38.250 [Pipeline] sh 00:01:38.528 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:38.550 [Pipeline] sh 00:01:38.883 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:38.883 ++ readlink -f spdk_repo 00:01:38.883 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.883 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.883 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.883 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.883 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.883 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.883 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.883 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:38.883 + cd /home/vagrant/spdk_repo 00:01:38.883 + source /etc/os-release 00:01:38.883 ++ NAME='Fedora Linux' 00:01:38.883 ++ VERSION='38 (Cloud Edition)' 00:01:38.883 ++ ID=fedora 00:01:38.883 ++ VERSION_ID=38 00:01:38.883 ++ VERSION_CODENAME= 00:01:38.883 ++ PLATFORM_ID=platform:f38 00:01:38.883 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:38.884 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.884 ++ LOGO=fedora-logo-icon 00:01:38.884 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:38.884 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.884 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:38.884 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.884 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.884 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.884 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:38.884 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.884 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:38.884 ++ SUPPORT_END=2024-05-14 00:01:38.884 ++ VARIANT='Cloud Edition' 00:01:38.884 ++ VARIANT_ID=cloud 00:01:38.884 + uname -a 00:01:38.884 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:38.884 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:39.450 Hugepages 00:01:39.450 node hugesize free / total 00:01:39.450 node0 1048576kB 0 / 0 00:01:39.450 node0 2048kB 0 / 0 00:01:39.450 00:01:39.450 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.450 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:39.450 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:39.450 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:39.450 + rm -f /tmp/spdk-ld-path 00:01:39.450 + source autorun-spdk.conf 00:01:39.450 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.450 ++ SPDK_TEST_NVMF=1 00:01:39.450 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.450 ++ SPDK_TEST_USDT=1 00:01:39.451 ++ SPDK_TEST_NVMF_MDNS=1 00:01:39.451 ++ SPDK_RUN_UBSAN=1 00:01:39.451 ++ NET_TYPE=virt 00:01:39.451 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:39.451 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.451 ++ RUN_NIGHTLY=0 00:01:39.451 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.451 + [[ -n '' ]] 00:01:39.451 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:39.451 + for M in /var/spdk/build-*-manifest.txt 00:01:39.451 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.451 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.709 + for M in /var/spdk/build-*-manifest.txt 00:01:39.709 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.709 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.709 ++ uname 00:01:39.709 + [[ Linux == \L\i\n\u\x ]] 00:01:39.709 + sudo dmesg -T 00:01:39.709 + sudo dmesg --clear 00:01:39.709 + dmesg_pid=5159 00:01:39.709 + sudo dmesg -Tw 00:01:39.709 + [[ Fedora Linux == FreeBSD ]] 00:01:39.709 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.709 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.709 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.709 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.709 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:39.709 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.709 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.709 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:39.709 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.709 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.709 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.709 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.709 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.709 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.709 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.709 Test configuration: 00:01:39.709 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.709 SPDK_TEST_NVMF=1 00:01:39.709 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.709 SPDK_TEST_USDT=1 00:01:39.709 SPDK_TEST_NVMF_MDNS=1 00:01:39.709 SPDK_RUN_UBSAN=1 00:01:39.709 NET_TYPE=virt 00:01:39.709 SPDK_JSONRPC_GO_CLIENT=1 00:01:39.709 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.709 RUN_NIGHTLY=0 12:45:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:39.709 12:45:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.709 12:45:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.709 12:45:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.709 12:45:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.709 12:45:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.709 12:45:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.709 12:45:52 -- paths/export.sh@5 -- $ export PATH 00:01:39.709 12:45:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.709 12:45:52 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:39.709 12:45:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:39.709 12:45:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047552.XXXXXX 00:01:39.709 12:45:52 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721047552.zKV8ES 00:01:39.709 12:45:52 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:39.709 12:45:52 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:39.709 12:45:52 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:39.709 12:45:52 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:39.710 12:45:52 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.710 12:45:52 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:39.710 12:45:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:39.710 12:45:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.710 12:45:52 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:39.710 12:45:52 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:39.710 12:45:52 -- pm/common@17 -- $ local monitor 00:01:39.710 12:45:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.710 12:45:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.710 12:45:52 -- pm/common@21 -- $ date +%s 00:01:39.710 12:45:52 -- pm/common@25 -- $ sleep 1 00:01:39.710 12:45:52 -- pm/common@21 -- $ date +%s 00:01:39.710 12:45:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721047552 00:01:39.710 12:45:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721047552 00:01:39.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721047552_collect-vmstat.pm.log 00:01:39.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721047552_collect-cpu-load.pm.log 00:01:41.084 12:45:53 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:41.084 12:45:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.084 12:45:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.084 12:45:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:41.084 12:45:53 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.084 Mon Jul 15 12:45:53 PM UTC 2024 00:01:41.084 12:45:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.084 v24.09-pre-213-ga62e924c8 00:01:41.084 12:45:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.084 12:45:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.084 12:45:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.084 12:45:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:41.084 12:45:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.084 12:45:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.084 ************************************ 00:01:41.084 START TEST ubsan 00:01:41.084 ************************************ 00:01:41.084 using ubsan 00:01:41.084 12:45:53 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:41.084 00:01:41.084 
real 0m0.000s 00:01:41.084 user 0m0.000s 00:01:41.084 sys 0m0.000s 00:01:41.084 12:45:53 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:41.084 12:45:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.084 ************************************ 00:01:41.084 END TEST ubsan 00:01:41.084 ************************************ 00:01:41.084 12:45:53 -- common/autotest_common.sh@1142 -- $ return 0 00:01:41.084 12:45:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:41.084 12:45:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.084 12:45:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.084 12:45:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:41.084 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:41.084 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:41.341 Using 'verbs' RDMA provider 00:01:54.485 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:06.689 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:06.689 go version go1.21.1 linux/amd64 00:02:06.947 Creating mk/config.mk...done. 00:02:06.947 Creating mk/cc.flags.mk...done. 00:02:06.947 Type 'make' to build. 00:02:06.947 12:46:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:06.947 12:46:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.947 12:46:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.947 12:46:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.947 ************************************ 00:02:06.947 START TEST make 00:02:06.947 ************************************ 00:02:06.947 12:46:19 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:07.204 make[1]: Nothing to be done for 'all'. 
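[Illustrative aside, not part of the console output] The configure and build steps recorded above condense into the following manual equivalent. This is a sketch assuming a checkout at /home/vagrant/spdk_repo/spdk; it replays only the flags and make invocation visible in this log, not the full spdk/autorun.sh and autobuild flow.

    # Sketch: same flags as reported by get_config_params above (plus the
    # --with-shared appended at configure time), then the parallel build whose
    # DPDK/meson output begins below.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
    make -j10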
00:02:29.206 The Meson build system 00:02:29.206 Version: 1.3.1 00:02:29.206 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:29.206 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:29.206 Build type: native build 00:02:29.206 Program cat found: YES (/usr/bin/cat) 00:02:29.206 Project name: DPDK 00:02:29.206 Project version: 24.03.0 00:02:29.206 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:29.206 C linker for the host machine: cc ld.bfd 2.39-16 00:02:29.206 Host machine cpu family: x86_64 00:02:29.206 Host machine cpu: x86_64 00:02:29.206 Message: ## Building in Developer Mode ## 00:02:29.206 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.206 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.206 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.206 Program python3 found: YES (/usr/bin/python3) 00:02:29.206 Program cat found: YES (/usr/bin/cat) 00:02:29.206 Compiler for C supports arguments -march=native: YES 00:02:29.206 Checking for size of "void *" : 8 00:02:29.206 Checking for size of "void *" : 8 (cached) 00:02:29.206 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:29.206 Library m found: YES 00:02:29.206 Library numa found: YES 00:02:29.206 Has header "numaif.h" : YES 00:02:29.206 Library fdt found: NO 00:02:29.206 Library execinfo found: NO 00:02:29.206 Has header "execinfo.h" : YES 00:02:29.206 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:29.206 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.206 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.206 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.206 Run-time dependency openssl found: YES 3.0.9 00:02:29.206 Run-time dependency libpcap found: YES 1.10.4 00:02:29.206 Has header "pcap.h" with dependency libpcap: YES 00:02:29.206 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.206 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.206 Compiler for C supports arguments -Wformat: YES 00:02:29.206 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.206 Compiler for C supports arguments -Wformat-security: NO 00:02:29.206 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.206 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.206 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.206 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.206 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.206 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.206 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.206 Compiler for C supports arguments -Wundef: YES 00:02:29.206 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.206 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.206 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.206 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.206 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.206 Program objdump found: YES (/usr/bin/objdump) 00:02:29.206 Compiler for C supports arguments -mavx512f: YES 00:02:29.206 Checking if "AVX512 checking" compiles: YES 00:02:29.206 Fetching value of define "__SSE4_2__" : 1 00:02:29.206 Fetching value of define 
"__AES__" : 1 00:02:29.206 Fetching value of define "__AVX__" : 1 00:02:29.206 Fetching value of define "__AVX2__" : 1 00:02:29.206 Fetching value of define "__AVX512BW__" : (undefined) 00:02:29.206 Fetching value of define "__AVX512CD__" : (undefined) 00:02:29.206 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:29.206 Fetching value of define "__AVX512F__" : (undefined) 00:02:29.206 Fetching value of define "__AVX512VL__" : (undefined) 00:02:29.206 Fetching value of define "__PCLMUL__" : 1 00:02:29.206 Fetching value of define "__RDRND__" : 1 00:02:29.206 Fetching value of define "__RDSEED__" : 1 00:02:29.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.206 Fetching value of define "__znver1__" : (undefined) 00:02:29.206 Fetching value of define "__znver2__" : (undefined) 00:02:29.206 Fetching value of define "__znver3__" : (undefined) 00:02:29.206 Fetching value of define "__znver4__" : (undefined) 00:02:29.206 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.206 Message: lib/log: Defining dependency "log" 00:02:29.206 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.206 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.206 Checking for function "getentropy" : NO 00:02:29.206 Message: lib/eal: Defining dependency "eal" 00:02:29.206 Message: lib/ring: Defining dependency "ring" 00:02:29.206 Message: lib/rcu: Defining dependency "rcu" 00:02:29.206 Message: lib/mempool: Defining dependency "mempool" 00:02:29.206 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.206 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.206 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.206 Compiler for C supports arguments -mpclmul: YES 00:02:29.206 Compiler for C supports arguments -maes: YES 00:02:29.206 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.206 Compiler for C supports arguments -mavx512bw: YES 00:02:29.206 Compiler for C supports arguments -mavx512dq: YES 00:02:29.206 Compiler for C supports arguments -mavx512vl: YES 00:02:29.206 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.206 Compiler for C supports arguments -mavx2: YES 00:02:29.206 Compiler for C supports arguments -mavx: YES 00:02:29.206 Message: lib/net: Defining dependency "net" 00:02:29.207 Message: lib/meter: Defining dependency "meter" 00:02:29.207 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.207 Message: lib/pci: Defining dependency "pci" 00:02:29.207 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.207 Message: lib/hash: Defining dependency "hash" 00:02:29.207 Message: lib/timer: Defining dependency "timer" 00:02:29.207 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.207 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.207 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.207 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.207 Message: lib/power: Defining dependency "power" 00:02:29.207 Message: lib/reorder: Defining dependency "reorder" 00:02:29.207 Message: lib/security: Defining dependency "security" 00:02:29.207 Has header "linux/userfaultfd.h" : YES 00:02:29.207 Has header "linux/vduse.h" : YES 00:02:29.207 Message: lib/vhost: Defining dependency "vhost" 00:02:29.207 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.207 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.207 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.207 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.207 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.207 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.207 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.207 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.207 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.207 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.207 Program doxygen found: YES (/usr/bin/doxygen) 00:02:29.207 Configuring doxy-api-html.conf using configuration 00:02:29.207 Configuring doxy-api-man.conf using configuration 00:02:29.207 Program mandb found: YES (/usr/bin/mandb) 00:02:29.207 Program sphinx-build found: NO 00:02:29.207 Configuring rte_build_config.h using configuration 00:02:29.207 Message: 00:02:29.207 ================= 00:02:29.207 Applications Enabled 00:02:29.207 ================= 00:02:29.207 00:02:29.207 apps: 00:02:29.207 00:02:29.207 00:02:29.207 Message: 00:02:29.207 ================= 00:02:29.207 Libraries Enabled 00:02:29.207 ================= 00:02:29.207 00:02:29.207 libs: 00:02:29.207 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.207 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.207 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.207 00:02:29.207 Message: 00:02:29.207 =============== 00:02:29.207 Drivers Enabled 00:02:29.207 =============== 00:02:29.207 00:02:29.207 common: 00:02:29.207 00:02:29.207 bus: 00:02:29.207 pci, vdev, 00:02:29.207 mempool: 00:02:29.207 ring, 00:02:29.207 dma: 00:02:29.207 00:02:29.207 net: 00:02:29.207 00:02:29.207 crypto: 00:02:29.207 00:02:29.207 compress: 00:02:29.207 00:02:29.207 vdpa: 00:02:29.207 00:02:29.207 00:02:29.207 Message: 00:02:29.207 ================= 00:02:29.207 Content Skipped 00:02:29.207 ================= 00:02:29.207 00:02:29.207 apps: 00:02:29.207 dumpcap: explicitly disabled via build config 00:02:29.207 graph: explicitly disabled via build config 00:02:29.207 pdump: explicitly disabled via build config 00:02:29.207 proc-info: explicitly disabled via build config 00:02:29.207 test-acl: explicitly disabled via build config 00:02:29.207 test-bbdev: explicitly disabled via build config 00:02:29.207 test-cmdline: explicitly disabled via build config 00:02:29.207 test-compress-perf: explicitly disabled via build config 00:02:29.207 test-crypto-perf: explicitly disabled via build config 00:02:29.207 test-dma-perf: explicitly disabled via build config 00:02:29.207 test-eventdev: explicitly disabled via build config 00:02:29.207 test-fib: explicitly disabled via build config 00:02:29.207 test-flow-perf: explicitly disabled via build config 00:02:29.207 test-gpudev: explicitly disabled via build config 00:02:29.207 test-mldev: explicitly disabled via build config 00:02:29.207 test-pipeline: explicitly disabled via build config 00:02:29.207 test-pmd: explicitly disabled via build config 00:02:29.207 test-regex: explicitly disabled via build config 00:02:29.207 test-sad: explicitly disabled via build config 00:02:29.207 test-security-perf: explicitly disabled via build config 00:02:29.207 00:02:29.207 libs: 00:02:29.207 argparse: explicitly disabled via build config 00:02:29.207 metrics: explicitly disabled via build config 00:02:29.207 acl: explicitly disabled via build config 00:02:29.207 bbdev: explicitly disabled via build config 00:02:29.207 
bitratestats: explicitly disabled via build config 00:02:29.207 bpf: explicitly disabled via build config 00:02:29.207 cfgfile: explicitly disabled via build config 00:02:29.207 distributor: explicitly disabled via build config 00:02:29.207 efd: explicitly disabled via build config 00:02:29.207 eventdev: explicitly disabled via build config 00:02:29.207 dispatcher: explicitly disabled via build config 00:02:29.207 gpudev: explicitly disabled via build config 00:02:29.207 gro: explicitly disabled via build config 00:02:29.207 gso: explicitly disabled via build config 00:02:29.207 ip_frag: explicitly disabled via build config 00:02:29.207 jobstats: explicitly disabled via build config 00:02:29.207 latencystats: explicitly disabled via build config 00:02:29.207 lpm: explicitly disabled via build config 00:02:29.207 member: explicitly disabled via build config 00:02:29.207 pcapng: explicitly disabled via build config 00:02:29.207 rawdev: explicitly disabled via build config 00:02:29.207 regexdev: explicitly disabled via build config 00:02:29.207 mldev: explicitly disabled via build config 00:02:29.207 rib: explicitly disabled via build config 00:02:29.207 sched: explicitly disabled via build config 00:02:29.207 stack: explicitly disabled via build config 00:02:29.207 ipsec: explicitly disabled via build config 00:02:29.207 pdcp: explicitly disabled via build config 00:02:29.207 fib: explicitly disabled via build config 00:02:29.207 port: explicitly disabled via build config 00:02:29.207 pdump: explicitly disabled via build config 00:02:29.207 table: explicitly disabled via build config 00:02:29.207 pipeline: explicitly disabled via build config 00:02:29.207 graph: explicitly disabled via build config 00:02:29.207 node: explicitly disabled via build config 00:02:29.207 00:02:29.207 drivers: 00:02:29.207 common/cpt: not in enabled drivers build config 00:02:29.207 common/dpaax: not in enabled drivers build config 00:02:29.207 common/iavf: not in enabled drivers build config 00:02:29.207 common/idpf: not in enabled drivers build config 00:02:29.207 common/ionic: not in enabled drivers build config 00:02:29.207 common/mvep: not in enabled drivers build config 00:02:29.207 common/octeontx: not in enabled drivers build config 00:02:29.207 bus/auxiliary: not in enabled drivers build config 00:02:29.207 bus/cdx: not in enabled drivers build config 00:02:29.207 bus/dpaa: not in enabled drivers build config 00:02:29.207 bus/fslmc: not in enabled drivers build config 00:02:29.207 bus/ifpga: not in enabled drivers build config 00:02:29.207 bus/platform: not in enabled drivers build config 00:02:29.207 bus/uacce: not in enabled drivers build config 00:02:29.207 bus/vmbus: not in enabled drivers build config 00:02:29.207 common/cnxk: not in enabled drivers build config 00:02:29.207 common/mlx5: not in enabled drivers build config 00:02:29.207 common/nfp: not in enabled drivers build config 00:02:29.207 common/nitrox: not in enabled drivers build config 00:02:29.207 common/qat: not in enabled drivers build config 00:02:29.207 common/sfc_efx: not in enabled drivers build config 00:02:29.207 mempool/bucket: not in enabled drivers build config 00:02:29.207 mempool/cnxk: not in enabled drivers build config 00:02:29.207 mempool/dpaa: not in enabled drivers build config 00:02:29.207 mempool/dpaa2: not in enabled drivers build config 00:02:29.207 mempool/octeontx: not in enabled drivers build config 00:02:29.207 mempool/stack: not in enabled drivers build config 00:02:29.207 dma/cnxk: not in enabled drivers build 
config 00:02:29.207 dma/dpaa: not in enabled drivers build config 00:02:29.207 dma/dpaa2: not in enabled drivers build config 00:02:29.207 dma/hisilicon: not in enabled drivers build config 00:02:29.207 dma/idxd: not in enabled drivers build config 00:02:29.207 dma/ioat: not in enabled drivers build config 00:02:29.207 dma/skeleton: not in enabled drivers build config 00:02:29.207 net/af_packet: not in enabled drivers build config 00:02:29.207 net/af_xdp: not in enabled drivers build config 00:02:29.207 net/ark: not in enabled drivers build config 00:02:29.207 net/atlantic: not in enabled drivers build config 00:02:29.207 net/avp: not in enabled drivers build config 00:02:29.207 net/axgbe: not in enabled drivers build config 00:02:29.207 net/bnx2x: not in enabled drivers build config 00:02:29.207 net/bnxt: not in enabled drivers build config 00:02:29.207 net/bonding: not in enabled drivers build config 00:02:29.207 net/cnxk: not in enabled drivers build config 00:02:29.207 net/cpfl: not in enabled drivers build config 00:02:29.207 net/cxgbe: not in enabled drivers build config 00:02:29.207 net/dpaa: not in enabled drivers build config 00:02:29.207 net/dpaa2: not in enabled drivers build config 00:02:29.207 net/e1000: not in enabled drivers build config 00:02:29.207 net/ena: not in enabled drivers build config 00:02:29.207 net/enetc: not in enabled drivers build config 00:02:29.207 net/enetfec: not in enabled drivers build config 00:02:29.207 net/enic: not in enabled drivers build config 00:02:29.207 net/failsafe: not in enabled drivers build config 00:02:29.207 net/fm10k: not in enabled drivers build config 00:02:29.207 net/gve: not in enabled drivers build config 00:02:29.207 net/hinic: not in enabled drivers build config 00:02:29.207 net/hns3: not in enabled drivers build config 00:02:29.207 net/i40e: not in enabled drivers build config 00:02:29.207 net/iavf: not in enabled drivers build config 00:02:29.207 net/ice: not in enabled drivers build config 00:02:29.207 net/idpf: not in enabled drivers build config 00:02:29.207 net/igc: not in enabled drivers build config 00:02:29.207 net/ionic: not in enabled drivers build config 00:02:29.207 net/ipn3ke: not in enabled drivers build config 00:02:29.207 net/ixgbe: not in enabled drivers build config 00:02:29.207 net/mana: not in enabled drivers build config 00:02:29.207 net/memif: not in enabled drivers build config 00:02:29.207 net/mlx4: not in enabled drivers build config 00:02:29.207 net/mlx5: not in enabled drivers build config 00:02:29.207 net/mvneta: not in enabled drivers build config 00:02:29.207 net/mvpp2: not in enabled drivers build config 00:02:29.207 net/netvsc: not in enabled drivers build config 00:02:29.207 net/nfb: not in enabled drivers build config 00:02:29.207 net/nfp: not in enabled drivers build config 00:02:29.207 net/ngbe: not in enabled drivers build config 00:02:29.207 net/null: not in enabled drivers build config 00:02:29.207 net/octeontx: not in enabled drivers build config 00:02:29.207 net/octeon_ep: not in enabled drivers build config 00:02:29.208 net/pcap: not in enabled drivers build config 00:02:29.208 net/pfe: not in enabled drivers build config 00:02:29.208 net/qede: not in enabled drivers build config 00:02:29.208 net/ring: not in enabled drivers build config 00:02:29.208 net/sfc: not in enabled drivers build config 00:02:29.208 net/softnic: not in enabled drivers build config 00:02:29.208 net/tap: not in enabled drivers build config 00:02:29.208 net/thunderx: not in enabled drivers build config 00:02:29.208 
net/txgbe: not in enabled drivers build config 00:02:29.208 net/vdev_netvsc: not in enabled drivers build config 00:02:29.208 net/vhost: not in enabled drivers build config 00:02:29.208 net/virtio: not in enabled drivers build config 00:02:29.208 net/vmxnet3: not in enabled drivers build config 00:02:29.208 raw/*: missing internal dependency, "rawdev" 00:02:29.208 crypto/armv8: not in enabled drivers build config 00:02:29.208 crypto/bcmfs: not in enabled drivers build config 00:02:29.208 crypto/caam_jr: not in enabled drivers build config 00:02:29.208 crypto/ccp: not in enabled drivers build config 00:02:29.208 crypto/cnxk: not in enabled drivers build config 00:02:29.208 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.208 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.208 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.208 crypto/mlx5: not in enabled drivers build config 00:02:29.208 crypto/mvsam: not in enabled drivers build config 00:02:29.208 crypto/nitrox: not in enabled drivers build config 00:02:29.208 crypto/null: not in enabled drivers build config 00:02:29.208 crypto/octeontx: not in enabled drivers build config 00:02:29.208 crypto/openssl: not in enabled drivers build config 00:02:29.208 crypto/scheduler: not in enabled drivers build config 00:02:29.208 crypto/uadk: not in enabled drivers build config 00:02:29.208 crypto/virtio: not in enabled drivers build config 00:02:29.208 compress/isal: not in enabled drivers build config 00:02:29.208 compress/mlx5: not in enabled drivers build config 00:02:29.208 compress/nitrox: not in enabled drivers build config 00:02:29.208 compress/octeontx: not in enabled drivers build config 00:02:29.208 compress/zlib: not in enabled drivers build config 00:02:29.208 regex/*: missing internal dependency, "regexdev" 00:02:29.208 ml/*: missing internal dependency, "mldev" 00:02:29.208 vdpa/ifc: not in enabled drivers build config 00:02:29.208 vdpa/mlx5: not in enabled drivers build config 00:02:29.208 vdpa/nfp: not in enabled drivers build config 00:02:29.208 vdpa/sfc: not in enabled drivers build config 00:02:29.208 event/*: missing internal dependency, "eventdev" 00:02:29.208 baseband/*: missing internal dependency, "bbdev" 00:02:29.208 gpu/*: missing internal dependency, "gpudev" 00:02:29.208 00:02:29.208 00:02:29.208 Build targets in project: 85 00:02:29.208 00:02:29.208 DPDK 24.03.0 00:02:29.208 00:02:29.208 User defined options 00:02:29.208 buildtype : debug 00:02:29.208 default_library : shared 00:02:29.208 libdir : lib 00:02:29.208 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:29.208 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.208 c_link_args : 00:02:29.208 cpu_instruction_set: native 00:02:29.208 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:29.208 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:29.208 enable_docs : false 00:02:29.208 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:29.208 enable_kmods : false 00:02:29.208 max_lcores : 128 00:02:29.208 tests : false 00:02:29.208 00:02:29.208 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.208 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.208 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.208 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.208 [3/268] Linking static target lib/librte_kvargs.a 00:02:29.208 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.208 [5/268] Linking static target lib/librte_log.a 00:02:29.208 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.208 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.208 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.208 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.208 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.208 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.208 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.208 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.208 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.208 [15/268] Linking target lib/librte_log.so.24.1 00:02:29.208 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:29.208 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.466 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:29.466 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.466 [20/268] Linking static target lib/librte_telemetry.a 00:02:29.724 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:29.724 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:29.983 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.241 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.241 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.241 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.241 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.807 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.807 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.807 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.807 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:30.807 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:31.065 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.065 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.322 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.322 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.322 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.322 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.322 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.580 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.580 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.580 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.837 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:32.403 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.403 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.662 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.662 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.934 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.934 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.934 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.934 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:32.934 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.200 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:33.200 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.459 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:34.025 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:34.025 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:34.025 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:34.283 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:34.283 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:34.540 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.540 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:34.540 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:34.540 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:34.797 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:34.798 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.055 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.055 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.621 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.621 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.621 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:35.621 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.621 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:35.879 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:35.879 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:35.879 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:35.879 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.137 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.137 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.396 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:36.396 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.961 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.961 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:36.961 [84/268] Linking static target lib/librte_ring.a 00:02:36.961 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.219 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.219 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.219 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.219 [89/268] Linking static target lib/librte_rcu.a 00:02:37.219 [90/268] Linking static target lib/librte_eal.a 00:02:37.219 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.477 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.477 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.735 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.735 [95/268] Linking static target lib/librte_mempool.a 00:02:37.735 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.735 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.735 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.994 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.994 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.994 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.252 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.252 [103/268] Linking static target lib/librte_mbuf.a 00:02:38.252 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.510 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.510 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.510 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.768 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.768 [109/268] Linking static target lib/librte_meter.a 00:02:39.025 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.025 [111/268] Linking static target lib/librte_net.a 00:02:39.282 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.282 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.282 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.540 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.540 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.540 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.797 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.798 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.361 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.361 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.618 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.875 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.875 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.134 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.134 [126/268] Linking static target lib/librte_pci.a 00:02:41.134 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:41.134 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.391 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:41.391 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.391 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.391 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:41.391 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.647 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.647 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.647 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.647 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:41.647 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.647 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.647 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.647 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.904 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:41.904 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.904 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.904 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.904 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.496 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.496 [148/268] Linking static target lib/librte_cmdline.a 00:02:42.496 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.754 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.754 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.754 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.012 [153/268] Linking static target lib/librte_ethdev.a 00:02:43.012 [154/268] Linking static target lib/librte_timer.a 00:02:43.012 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.012 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.012 [157/268] Linking static target lib/librte_hash.a 00:02:43.012 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.271 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.271 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.271 
[161/268] Linking static target lib/librte_compressdev.a 00:02:43.838 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.838 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.838 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.838 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.403 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.403 [167/268] Linking static target lib/librte_cryptodev.a 00:02:44.403 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.403 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.403 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.661 [171/268] Linking static target lib/librte_dmadev.a 00:02:44.661 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.661 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.661 [174/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.661 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.919 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.919 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:45.177 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.177 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:45.743 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:45.743 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.743 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.001 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:46.001 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:46.259 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.259 [186/268] Linking static target lib/librte_power.a 00:02:46.259 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:46.259 [188/268] Linking static target lib/librte_security.a 00:02:46.516 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:46.516 [190/268] Linking static target lib/librte_reorder.a 00:02:46.774 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.032 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.032 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.032 [194/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.290 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.548 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.548 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.548 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.806 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:02:48.064 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.322 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.322 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.322 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.322 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.322 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.322 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.889 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.889 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.889 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.889 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:49.146 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.146 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.146 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.146 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.146 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.146 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.146 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.146 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:49.404 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.404 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.404 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.404 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:49.404 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.404 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.404 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.404 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:49.404 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.970 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.534 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.534 [230/268] Linking target lib/librte_eal.so.24.1 00:02:50.534 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.806 [232/268] Linking target lib/librte_ring.so.24.1 00:02:50.806 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.806 [234/268] Linking target lib/librte_meter.so.24.1 00:02:50.806 [235/268] Linking target lib/librte_pci.so.24.1 00:02:50.806 [236/268] Linking target lib/librte_timer.so.24.1 00:02:50.806 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:50.806 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.806 [239/268] Generating symbol file 
lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:50.806 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.806 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.806 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.070 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.070 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:51.070 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:51.070 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.070 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.070 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.070 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:51.328 [250/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.328 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.328 [252/268] Linking static target lib/librte_vhost.a 00:02:51.328 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:51.328 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:51.328 [255/268] Linking target lib/librte_net.so.24.1 00:02:51.328 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:51.585 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.585 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.585 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:51.585 [260/268] Linking target lib/librte_hash.so.24.1 00:02:51.585 [261/268] Linking target lib/librte_security.so.24.1 00:02:51.844 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:51.845 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.103 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:52.103 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:52.103 [266/268] Linking target lib/librte_power.so.24.1 00:02:52.671 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.929 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:52.929 INFO: autodetecting backend as ninja 00:02:52.929 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:54.301 CC lib/ut/ut.o 00:02:54.301 CC lib/ut_mock/mock.o 00:02:54.301 CC lib/log/log.o 00:02:54.301 CC lib/log/log_flags.o 00:02:54.301 CC lib/log/log_deprecated.o 00:02:54.301 LIB libspdk_ut_mock.a 00:02:54.301 SO libspdk_ut_mock.so.6.0 00:02:54.301 LIB libspdk_log.a 00:02:54.301 LIB libspdk_ut.a 00:02:54.301 SYMLINK libspdk_ut_mock.so 00:02:54.301 SO libspdk_ut.so.2.0 00:02:54.301 SO libspdk_log.so.7.0 00:02:54.559 SYMLINK libspdk_ut.so 00:02:54.559 SYMLINK libspdk_log.so 00:02:54.559 CC lib/ioat/ioat.o 00:02:54.559 CXX lib/trace_parser/trace.o 00:02:54.559 CC lib/util/base64.o 00:02:54.559 CC lib/util/bit_array.o 00:02:54.559 CC lib/dma/dma.o 00:02:54.559 CC lib/util/cpuset.o 00:02:54.559 CC lib/util/crc16.o 00:02:54.559 CC lib/util/crc32.o 00:02:54.817 CC lib/util/crc32c.o 00:02:54.817 CC lib/vfio_user/host/vfio_user_pci.o 00:02:54.817 CC lib/vfio_user/host/vfio_user.o 00:02:54.817 LIB libspdk_dma.a 00:02:54.817 CC 
lib/util/crc32_ieee.o 00:02:54.817 CC lib/util/crc64.o 00:02:54.817 CC lib/util/dif.o 00:02:54.817 SO libspdk_dma.so.4.0 00:02:55.075 CC lib/util/fd.o 00:02:55.075 SYMLINK libspdk_dma.so 00:02:55.075 CC lib/util/file.o 00:02:55.075 CC lib/util/hexlify.o 00:02:55.075 CC lib/util/iov.o 00:02:55.075 LIB libspdk_ioat.a 00:02:55.075 LIB libspdk_vfio_user.a 00:02:55.075 SO libspdk_ioat.so.7.0 00:02:55.075 CC lib/util/math.o 00:02:55.075 CC lib/util/pipe.o 00:02:55.075 SO libspdk_vfio_user.so.5.0 00:02:55.332 SYMLINK libspdk_ioat.so 00:02:55.332 CC lib/util/strerror_tls.o 00:02:55.333 CC lib/util/string.o 00:02:55.333 CC lib/util/uuid.o 00:02:55.333 CC lib/util/fd_group.o 00:02:55.333 SYMLINK libspdk_vfio_user.so 00:02:55.333 CC lib/util/xor.o 00:02:55.333 CC lib/util/zipf.o 00:02:55.591 LIB libspdk_util.a 00:02:55.591 SO libspdk_util.so.9.1 00:02:55.850 SYMLINK libspdk_util.so 00:02:56.109 LIB libspdk_trace_parser.a 00:02:56.109 SO libspdk_trace_parser.so.5.0 00:02:56.109 CC lib/rdma_provider/common.o 00:02:56.109 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.109 CC lib/rdma_utils/rdma_utils.o 00:02:56.109 CC lib/idxd/idxd.o 00:02:56.109 CC lib/idxd/idxd_user.o 00:02:56.109 CC lib/vmd/vmd.o 00:02:56.109 CC lib/conf/conf.o 00:02:56.109 CC lib/env_dpdk/env.o 00:02:56.109 CC lib/json/json_parse.o 00:02:56.368 SYMLINK libspdk_trace_parser.so 00:02:56.368 CC lib/env_dpdk/memory.o 00:02:56.368 CC lib/env_dpdk/pci.o 00:02:56.368 LIB libspdk_rdma_utils.a 00:02:56.368 SO libspdk_rdma_utils.so.1.0 00:02:56.368 LIB libspdk_rdma_provider.a 00:02:56.368 SYMLINK libspdk_rdma_utils.so 00:02:56.368 SO libspdk_rdma_provider.so.6.0 00:02:56.626 CC lib/idxd/idxd_kernel.o 00:02:56.626 LIB libspdk_conf.a 00:02:56.626 CC lib/json/json_util.o 00:02:56.626 CC lib/env_dpdk/init.o 00:02:56.626 SO libspdk_conf.so.6.0 00:02:56.626 SYMLINK libspdk_rdma_provider.so 00:02:56.626 SYMLINK libspdk_conf.so 00:02:56.626 CC lib/json/json_write.o 00:02:56.626 CC lib/vmd/led.o 00:02:56.626 CC lib/env_dpdk/threads.o 00:02:56.884 CC lib/env_dpdk/pci_ioat.o 00:02:56.884 CC lib/env_dpdk/pci_virtio.o 00:02:56.884 CC lib/env_dpdk/pci_vmd.o 00:02:56.884 CC lib/env_dpdk/pci_idxd.o 00:02:56.884 LIB libspdk_vmd.a 00:02:56.884 LIB libspdk_json.a 00:02:56.884 SO libspdk_vmd.so.6.0 00:02:56.884 SO libspdk_json.so.6.0 00:02:56.884 CC lib/env_dpdk/pci_event.o 00:02:56.884 LIB libspdk_idxd.a 00:02:57.171 SYMLINK libspdk_json.so 00:02:57.171 SYMLINK libspdk_vmd.so 00:02:57.171 CC lib/env_dpdk/sigbus_handler.o 00:02:57.171 SO libspdk_idxd.so.12.0 00:02:57.171 CC lib/env_dpdk/pci_dpdk.o 00:02:57.171 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.171 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.171 SYMLINK libspdk_idxd.so 00:02:57.171 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.171 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.172 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.172 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.765 LIB libspdk_jsonrpc.a 00:02:57.765 SO libspdk_jsonrpc.so.6.0 00:02:57.765 SYMLINK libspdk_jsonrpc.so 00:02:58.022 CC lib/rpc/rpc.o 00:02:58.022 LIB libspdk_env_dpdk.a 00:02:58.280 LIB libspdk_rpc.a 00:02:58.280 SO libspdk_env_dpdk.so.14.1 00:02:58.280 SO libspdk_rpc.so.6.0 00:02:58.280 SYMLINK libspdk_rpc.so 00:02:58.280 SYMLINK libspdk_env_dpdk.so 00:02:58.537 CC lib/notify/notify.o 00:02:58.537 CC lib/notify/notify_rpc.o 00:02:58.537 CC lib/trace/trace.o 00:02:58.537 CC lib/trace/trace_flags.o 00:02:58.537 CC lib/trace/trace_rpc.o 00:02:58.537 CC lib/keyring/keyring.o 00:02:58.537 CC lib/keyring/keyring_rpc.o 00:02:58.794 LIB 
libspdk_notify.a 00:02:58.794 SO libspdk_notify.so.6.0 00:02:59.051 LIB libspdk_trace.a 00:02:59.051 LIB libspdk_keyring.a 00:02:59.051 SYMLINK libspdk_notify.so 00:02:59.051 SO libspdk_trace.so.10.0 00:02:59.051 SO libspdk_keyring.so.1.0 00:02:59.051 SYMLINK libspdk_trace.so 00:02:59.051 SYMLINK libspdk_keyring.so 00:02:59.307 CC lib/thread/thread.o 00:02:59.307 CC lib/sock/sock.o 00:02:59.307 CC lib/thread/iobuf.o 00:02:59.307 CC lib/sock/sock_rpc.o 00:02:59.872 LIB libspdk_sock.a 00:02:59.872 SO libspdk_sock.so.10.0 00:02:59.872 SYMLINK libspdk_sock.so 00:03:00.129 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.129 CC lib/nvme/nvme_ctrlr.o 00:03:00.129 CC lib/nvme/nvme_ns_cmd.o 00:03:00.129 CC lib/nvme/nvme_ns.o 00:03:00.129 CC lib/nvme/nvme_fabric.o 00:03:00.129 CC lib/nvme/nvme_pcie_common.o 00:03:00.129 CC lib/nvme/nvme_pcie.o 00:03:00.129 CC lib/nvme/nvme_qpair.o 00:03:00.129 CC lib/nvme/nvme.o 00:03:01.060 LIB libspdk_thread.a 00:03:01.060 SO libspdk_thread.so.10.1 00:03:01.060 SYMLINK libspdk_thread.so 00:03:01.060 CC lib/nvme/nvme_quirks.o 00:03:01.317 CC lib/nvme/nvme_transport.o 00:03:01.573 CC lib/nvme/nvme_discovery.o 00:03:01.573 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.830 CC lib/accel/accel.o 00:03:01.830 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.830 CC lib/blob/blobstore.o 00:03:01.830 CC lib/init/json_config.o 00:03:01.830 CC lib/init/subsystem.o 00:03:02.088 CC lib/blob/request.o 00:03:02.088 CC lib/blob/zeroes.o 00:03:02.088 CC lib/init/subsystem_rpc.o 00:03:02.345 CC lib/init/rpc.o 00:03:02.345 CC lib/accel/accel_rpc.o 00:03:02.345 CC lib/blob/blob_bs_dev.o 00:03:02.619 CC lib/accel/accel_sw.o 00:03:02.619 LIB libspdk_init.a 00:03:02.619 CC lib/nvme/nvme_tcp.o 00:03:02.619 CC lib/nvme/nvme_opal.o 00:03:02.619 SO libspdk_init.so.5.0 00:03:02.619 CC lib/nvme/nvme_io_msg.o 00:03:02.879 CC lib/nvme/nvme_poll_group.o 00:03:02.879 SYMLINK libspdk_init.so 00:03:02.879 CC lib/virtio/virtio.o 00:03:02.879 CC lib/virtio/virtio_vhost_user.o 00:03:03.138 CC lib/event/app.o 00:03:03.138 CC lib/virtio/virtio_vfio_user.o 00:03:03.396 CC lib/virtio/virtio_pci.o 00:03:03.396 CC lib/nvme/nvme_zns.o 00:03:03.396 CC lib/nvme/nvme_stubs.o 00:03:03.653 CC lib/event/reactor.o 00:03:03.653 LIB libspdk_accel.a 00:03:03.653 CC lib/event/log_rpc.o 00:03:03.653 SO libspdk_accel.so.15.1 00:03:03.653 SYMLINK libspdk_accel.so 00:03:03.653 CC lib/event/app_rpc.o 00:03:03.912 LIB libspdk_virtio.a 00:03:03.912 SO libspdk_virtio.so.7.0 00:03:03.912 CC lib/event/scheduler_static.o 00:03:04.169 SYMLINK libspdk_virtio.so 00:03:04.169 CC lib/nvme/nvme_auth.o 00:03:04.169 CC lib/nvme/nvme_cuse.o 00:03:04.169 CC lib/bdev/bdev.o 00:03:04.169 CC lib/bdev/bdev_rpc.o 00:03:04.169 CC lib/bdev/bdev_zone.o 00:03:04.169 CC lib/bdev/part.o 00:03:04.426 LIB libspdk_event.a 00:03:04.426 SO libspdk_event.so.14.0 00:03:04.426 CC lib/nvme/nvme_rdma.o 00:03:04.426 SYMLINK libspdk_event.so 00:03:04.683 CC lib/bdev/scsi_nvme.o 00:03:06.054 LIB libspdk_nvme.a 00:03:06.311 SO libspdk_nvme.so.13.1 00:03:06.311 LIB libspdk_blob.a 00:03:06.311 SO libspdk_blob.so.11.0 00:03:06.570 SYMLINK libspdk_blob.so 00:03:06.570 SYMLINK libspdk_nvme.so 00:03:06.827 CC lib/blobfs/blobfs.o 00:03:06.827 CC lib/blobfs/tree.o 00:03:06.827 CC lib/lvol/lvol.o 00:03:07.391 LIB libspdk_bdev.a 00:03:07.391 SO libspdk_bdev.so.15.1 00:03:07.391 SYMLINK libspdk_bdev.so 00:03:07.649 CC lib/nbd/nbd.o 00:03:07.649 CC lib/nbd/nbd_rpc.o 00:03:07.649 CC lib/scsi/dev.o 00:03:07.649 CC lib/ftl/ftl_core.o 00:03:07.649 CC lib/scsi/lun.o 00:03:07.649 CC lib/ftl/ftl_init.o 
00:03:07.649 CC lib/nvmf/ctrlr.o 00:03:07.649 CC lib/ublk/ublk.o 00:03:07.907 LIB libspdk_lvol.a 00:03:07.907 LIB libspdk_blobfs.a 00:03:07.907 SO libspdk_lvol.so.10.0 00:03:07.907 SO libspdk_blobfs.so.10.0 00:03:07.907 SYMLINK libspdk_lvol.so 00:03:07.907 CC lib/ftl/ftl_layout.o 00:03:07.907 SYMLINK libspdk_blobfs.so 00:03:07.907 CC lib/scsi/port.o 00:03:07.907 CC lib/ublk/ublk_rpc.o 00:03:07.907 CC lib/scsi/scsi.o 00:03:08.165 CC lib/scsi/scsi_bdev.o 00:03:08.165 CC lib/scsi/scsi_pr.o 00:03:08.165 CC lib/nvmf/ctrlr_discovery.o 00:03:08.165 LIB libspdk_nbd.a 00:03:08.165 SO libspdk_nbd.so.7.0 00:03:08.165 CC lib/scsi/scsi_rpc.o 00:03:08.423 CC lib/scsi/task.o 00:03:08.423 CC lib/ftl/ftl_debug.o 00:03:08.423 SYMLINK libspdk_nbd.so 00:03:08.423 CC lib/ftl/ftl_io.o 00:03:08.423 CC lib/ftl/ftl_sb.o 00:03:08.423 LIB libspdk_ublk.a 00:03:08.680 CC lib/nvmf/ctrlr_bdev.o 00:03:08.680 CC lib/ftl/ftl_l2p.o 00:03:08.680 SO libspdk_ublk.so.3.0 00:03:08.680 CC lib/ftl/ftl_l2p_flat.o 00:03:08.680 SYMLINK libspdk_ublk.so 00:03:08.680 CC lib/ftl/ftl_nv_cache.o 00:03:08.680 CC lib/ftl/ftl_band.o 00:03:08.680 CC lib/nvmf/subsystem.o 00:03:08.680 CC lib/ftl/ftl_band_ops.o 00:03:08.680 CC lib/nvmf/nvmf.o 00:03:08.939 CC lib/nvmf/nvmf_rpc.o 00:03:08.939 LIB libspdk_scsi.a 00:03:08.939 CC lib/ftl/ftl_writer.o 00:03:08.939 SO libspdk_scsi.so.9.0 00:03:08.939 CC lib/nvmf/transport.o 00:03:09.197 SYMLINK libspdk_scsi.so 00:03:09.197 CC lib/nvmf/tcp.o 00:03:09.454 CC lib/ftl/ftl_rq.o 00:03:09.712 CC lib/nvmf/stubs.o 00:03:09.713 CC lib/iscsi/conn.o 00:03:09.713 CC lib/iscsi/init_grp.o 00:03:09.713 CC lib/iscsi/iscsi.o 00:03:09.713 CC lib/iscsi/md5.o 00:03:09.713 CC lib/iscsi/param.o 00:03:10.322 CC lib/ftl/ftl_reloc.o 00:03:10.322 CC lib/vhost/vhost.o 00:03:10.322 CC lib/vhost/vhost_rpc.o 00:03:10.322 CC lib/iscsi/portal_grp.o 00:03:10.322 CC lib/nvmf/mdns_server.o 00:03:10.322 CC lib/vhost/vhost_scsi.o 00:03:10.596 CC lib/ftl/ftl_l2p_cache.o 00:03:10.596 CC lib/vhost/vhost_blk.o 00:03:10.596 CC lib/vhost/rte_vhost_user.o 00:03:10.596 CC lib/iscsi/tgt_node.o 00:03:11.159 CC lib/ftl/ftl_p2l.o 00:03:11.159 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.159 CC lib/iscsi/iscsi_subsystem.o 00:03:11.415 CC lib/iscsi/iscsi_rpc.o 00:03:11.415 CC lib/iscsi/task.o 00:03:11.415 CC lib/nvmf/rdma.o 00:03:11.415 CC lib/nvmf/auth.o 00:03:11.671 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.671 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.671 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.671 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.671 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.928 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.928 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.928 LIB libspdk_iscsi.a 00:03:11.928 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.185 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.185 LIB libspdk_vhost.a 00:03:12.185 SO libspdk_iscsi.so.8.0 00:03:12.185 SO libspdk_vhost.so.8.0 00:03:12.185 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.185 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.185 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.185 CC lib/ftl/utils/ftl_conf.o 00:03:12.481 SYMLINK libspdk_vhost.so 00:03:12.481 CC lib/ftl/utils/ftl_md.o 00:03:12.481 CC lib/ftl/utils/ftl_mempool.o 00:03:12.481 SYMLINK libspdk_iscsi.so 00:03:12.481 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.481 CC lib/ftl/utils/ftl_property.o 00:03:12.736 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.736 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.736 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.736 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.736 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.992 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.992 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.992 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.992 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.992 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.992 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.992 CC lib/ftl/base/ftl_base_dev.o 00:03:13.250 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.250 CC lib/ftl/ftl_trace.o 00:03:13.506 LIB libspdk_ftl.a 00:03:14.070 LIB libspdk_nvmf.a 00:03:14.070 SO libspdk_ftl.so.9.0 00:03:14.070 SO libspdk_nvmf.so.19.0 00:03:14.327 SYMLINK libspdk_nvmf.so 00:03:14.327 SYMLINK libspdk_ftl.so 00:03:14.893 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.893 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.893 CC module/keyring/file/keyring.o 00:03:14.893 CC module/accel/error/accel_error.o 00:03:14.893 CC module/blob/bdev/blob_bdev.o 00:03:14.893 CC module/sock/posix/posix.o 00:03:14.893 CC module/accel/ioat/accel_ioat.o 00:03:14.893 CC module/accel/dsa/accel_dsa.o 00:03:14.893 CC module/accel/iaa/accel_iaa.o 00:03:14.893 CC module/keyring/linux/keyring.o 00:03:14.893 LIB libspdk_env_dpdk_rpc.a 00:03:14.893 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.151 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.151 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.151 CC module/keyring/file/keyring_rpc.o 00:03:15.151 CC module/keyring/linux/keyring_rpc.o 00:03:15.151 CC module/accel/error/accel_error_rpc.o 00:03:15.151 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.151 LIB libspdk_scheduler_dynamic.a 00:03:15.409 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.409 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.409 LIB libspdk_accel_iaa.a 00:03:15.409 SO libspdk_accel_iaa.so.3.0 00:03:15.409 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.409 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.409 LIB libspdk_blob_bdev.a 00:03:15.409 LIB libspdk_keyring_file.a 00:03:15.409 LIB libspdk_accel_dsa.a 00:03:15.409 SO libspdk_blob_bdev.so.11.0 00:03:15.409 LIB libspdk_keyring_linux.a 00:03:15.409 SO libspdk_keyring_file.so.1.0 00:03:15.409 SO libspdk_accel_dsa.so.5.0 00:03:15.409 LIB libspdk_accel_ioat.a 00:03:15.409 LIB libspdk_accel_error.a 00:03:15.409 SO libspdk_keyring_linux.so.1.0 00:03:15.409 SYMLINK libspdk_accel_iaa.so 00:03:15.409 SYMLINK libspdk_blob_bdev.so 00:03:15.409 SO libspdk_accel_error.so.2.0 00:03:15.409 SO libspdk_accel_ioat.so.6.0 00:03:15.409 SYMLINK libspdk_keyring_file.so 00:03:15.409 SYMLINK libspdk_accel_dsa.so 00:03:15.409 SYMLINK libspdk_keyring_linux.so 00:03:15.668 SYMLINK libspdk_accel_ioat.so 00:03:15.668 SYMLINK libspdk_accel_error.so 00:03:15.668 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.668 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.668 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.926 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.926 LIB libspdk_scheduler_gscheduler.a 00:03:15.926 CC module/bdev/error/vbdev_error.o 00:03:15.926 CC module/bdev/gpt/gpt.o 00:03:15.926 CC module/bdev/malloc/bdev_malloc.o 00:03:15.926 CC module/bdev/delay/vbdev_delay.o 00:03:15.926 CC module/bdev/null/bdev_null.o 00:03:15.926 CC module/bdev/lvol/vbdev_lvol.o 00:03:15.926 CC module/blobfs/bdev/blobfs_bdev.o 00:03:15.926 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.926 SYMLINK libspdk_scheduler_gscheduler.so 00:03:15.926 LIB libspdk_sock_posix.a 00:03:15.926 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:15.926 CC module/bdev/nvme/bdev_nvme.o 00:03:16.184 SO libspdk_sock_posix.so.6.0 00:03:16.184 CC 
module/bdev/error/vbdev_error_rpc.o 00:03:16.184 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.184 SYMLINK libspdk_sock_posix.so 00:03:16.184 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.184 LIB libspdk_blobfs_bdev.a 00:03:16.184 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.184 SO libspdk_blobfs_bdev.so.6.0 00:03:16.184 LIB libspdk_bdev_error.a 00:03:16.442 SO libspdk_bdev_error.so.6.0 00:03:16.442 CC module/bdev/null/bdev_null_rpc.o 00:03:16.442 SYMLINK libspdk_blobfs_bdev.so 00:03:16.442 SYMLINK libspdk_bdev_error.so 00:03:16.442 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.442 LIB libspdk_bdev_delay.a 00:03:16.442 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.442 SO libspdk_bdev_delay.so.6.0 00:03:16.699 CC module/bdev/nvme/nvme_rpc.o 00:03:16.699 SYMLINK libspdk_bdev_delay.so 00:03:16.699 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.699 LIB libspdk_bdev_malloc.a 00:03:16.699 LIB libspdk_bdev_gpt.a 00:03:16.699 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.699 LIB libspdk_bdev_null.a 00:03:16.699 LIB libspdk_bdev_lvol.a 00:03:16.699 CC module/bdev/raid/bdev_raid.o 00:03:16.699 SO libspdk_bdev_gpt.so.6.0 00:03:16.699 SO libspdk_bdev_malloc.so.6.0 00:03:16.699 SO libspdk_bdev_null.so.6.0 00:03:16.699 SO libspdk_bdev_lvol.so.6.0 00:03:16.958 SYMLINK libspdk_bdev_gpt.so 00:03:16.958 SYMLINK libspdk_bdev_malloc.so 00:03:16.958 CC module/bdev/raid/bdev_raid_rpc.o 00:03:16.958 SYMLINK libspdk_bdev_null.so 00:03:16.958 CC module/bdev/nvme/vbdev_opal.o 00:03:16.958 SYMLINK libspdk_bdev_lvol.so 00:03:16.958 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.958 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.215 CC module/bdev/split/vbdev_split.o 00:03:17.215 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.215 LIB libspdk_bdev_passthru.a 00:03:17.215 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.215 SO libspdk_bdev_passthru.so.6.0 00:03:17.215 CC module/bdev/aio/bdev_aio.o 00:03:17.473 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.473 SYMLINK libspdk_bdev_passthru.so 00:03:17.473 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.473 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.473 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.473 CC module/bdev/raid/raid0.o 00:03:17.731 CC module/bdev/raid/raid1.o 00:03:17.731 LIB libspdk_bdev_zone_block.a 00:03:17.731 SO libspdk_bdev_zone_block.so.6.0 00:03:17.731 CC module/bdev/raid/concat.o 00:03:17.731 LIB libspdk_bdev_split.a 00:03:17.731 CC module/bdev/ftl/bdev_ftl.o 00:03:17.731 SO libspdk_bdev_split.so.6.0 00:03:17.731 SYMLINK libspdk_bdev_zone_block.so 00:03:17.731 LIB libspdk_bdev_aio.a 00:03:18.006 SYMLINK libspdk_bdev_split.so 00:03:18.006 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:18.006 SO libspdk_bdev_aio.so.6.0 00:03:18.006 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.006 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.006 SYMLINK libspdk_bdev_aio.so 00:03:18.006 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.006 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.006 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.285 LIB libspdk_bdev_raid.a 00:03:18.285 SO libspdk_bdev_raid.so.6.0 00:03:18.285 LIB libspdk_bdev_ftl.a 00:03:18.285 SO libspdk_bdev_ftl.so.6.0 00:03:18.285 SYMLINK libspdk_bdev_raid.so 00:03:18.543 SYMLINK libspdk_bdev_ftl.so 00:03:18.543 LIB libspdk_bdev_iscsi.a 00:03:18.543 SO libspdk_bdev_iscsi.so.6.0 00:03:18.543 SYMLINK libspdk_bdev_iscsi.so 00:03:18.801 LIB libspdk_bdev_virtio.a 00:03:18.801 SO libspdk_bdev_virtio.so.6.0 00:03:19.058 SYMLINK libspdk_bdev_virtio.so 00:03:19.623 LIB libspdk_bdev_nvme.a 
00:03:19.623 SO libspdk_bdev_nvme.so.7.0 00:03:19.880 SYMLINK libspdk_bdev_nvme.so 00:03:20.138 CC module/event/subsystems/iobuf/iobuf.o 00:03:20.138 CC module/event/subsystems/vmd/vmd.o 00:03:20.138 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:20.138 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:20.138 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:20.138 CC module/event/subsystems/sock/sock.o 00:03:20.138 CC module/event/subsystems/keyring/keyring.o 00:03:20.138 CC module/event/subsystems/scheduler/scheduler.o 00:03:20.396 LIB libspdk_event_keyring.a 00:03:20.396 LIB libspdk_event_vhost_blk.a 00:03:20.396 LIB libspdk_event_scheduler.a 00:03:20.396 LIB libspdk_event_vmd.a 00:03:20.396 SO libspdk_event_keyring.so.1.0 00:03:20.396 LIB libspdk_event_sock.a 00:03:20.396 SO libspdk_event_vhost_blk.so.3.0 00:03:20.396 LIB libspdk_event_iobuf.a 00:03:20.655 SO libspdk_event_scheduler.so.4.0 00:03:20.655 SO libspdk_event_vmd.so.6.0 00:03:20.655 SO libspdk_event_sock.so.5.0 00:03:20.655 SO libspdk_event_iobuf.so.3.0 00:03:20.655 SYMLINK libspdk_event_vhost_blk.so 00:03:20.655 SYMLINK libspdk_event_keyring.so 00:03:20.655 SYMLINK libspdk_event_scheduler.so 00:03:20.655 SYMLINK libspdk_event_sock.so 00:03:20.655 SYMLINK libspdk_event_vmd.so 00:03:20.655 SYMLINK libspdk_event_iobuf.so 00:03:20.919 CC module/event/subsystems/accel/accel.o 00:03:21.176 LIB libspdk_event_accel.a 00:03:21.176 SO libspdk_event_accel.so.6.0 00:03:21.176 SYMLINK libspdk_event_accel.so 00:03:21.433 CC module/event/subsystems/bdev/bdev.o 00:03:21.691 LIB libspdk_event_bdev.a 00:03:21.691 SO libspdk_event_bdev.so.6.0 00:03:21.691 SYMLINK libspdk_event_bdev.so 00:03:21.950 CC module/event/subsystems/nbd/nbd.o 00:03:21.950 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.950 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.950 CC module/event/subsystems/ublk/ublk.o 00:03:21.950 CC module/event/subsystems/scsi/scsi.o 00:03:22.208 LIB libspdk_event_ublk.a 00:03:22.208 LIB libspdk_event_nbd.a 00:03:22.208 LIB libspdk_event_scsi.a 00:03:22.208 SO libspdk_event_ublk.so.3.0 00:03:22.208 SO libspdk_event_nbd.so.6.0 00:03:22.208 SO libspdk_event_scsi.so.6.0 00:03:22.466 LIB libspdk_event_nvmf.a 00:03:22.466 SYMLINK libspdk_event_nbd.so 00:03:22.466 SYMLINK libspdk_event_ublk.so 00:03:22.466 SYMLINK libspdk_event_scsi.so 00:03:22.466 SO libspdk_event_nvmf.so.6.0 00:03:22.466 SYMLINK libspdk_event_nvmf.so 00:03:22.725 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.725 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.725 LIB libspdk_event_iscsi.a 00:03:22.725 LIB libspdk_event_vhost_scsi.a 00:03:22.983 SO libspdk_event_iscsi.so.6.0 00:03:22.983 SO libspdk_event_vhost_scsi.so.3.0 00:03:22.983 SYMLINK libspdk_event_iscsi.so 00:03:22.983 SYMLINK libspdk_event_vhost_scsi.so 00:03:22.983 SO libspdk.so.6.0 00:03:22.983 SYMLINK libspdk.so 00:03:23.241 CXX app/trace/trace.o 00:03:23.241 CC app/trace_record/trace_record.o 00:03:23.241 TEST_HEADER include/spdk/accel.h 00:03:23.241 TEST_HEADER include/spdk/accel_module.h 00:03:23.241 TEST_HEADER include/spdk/assert.h 00:03:23.241 TEST_HEADER include/spdk/barrier.h 00:03:23.241 TEST_HEADER include/spdk/base64.h 00:03:23.241 TEST_HEADER include/spdk/bdev.h 00:03:23.241 TEST_HEADER include/spdk/bdev_module.h 00:03:23.241 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.241 TEST_HEADER include/spdk/bit_array.h 00:03:23.500 TEST_HEADER include/spdk/bit_pool.h 00:03:23.500 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.500 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.500 
TEST_HEADER include/spdk/blobfs.h 00:03:23.500 TEST_HEADER include/spdk/blob.h 00:03:23.500 TEST_HEADER include/spdk/conf.h 00:03:23.500 TEST_HEADER include/spdk/config.h 00:03:23.500 TEST_HEADER include/spdk/cpuset.h 00:03:23.500 TEST_HEADER include/spdk/crc16.h 00:03:23.500 TEST_HEADER include/spdk/crc32.h 00:03:23.500 TEST_HEADER include/spdk/crc64.h 00:03:23.500 TEST_HEADER include/spdk/dif.h 00:03:23.500 TEST_HEADER include/spdk/dma.h 00:03:23.500 TEST_HEADER include/spdk/endian.h 00:03:23.500 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.500 TEST_HEADER include/spdk/env.h 00:03:23.500 TEST_HEADER include/spdk/event.h 00:03:23.500 TEST_HEADER include/spdk/fd_group.h 00:03:23.500 TEST_HEADER include/spdk/fd.h 00:03:23.500 CC app/nvmf_tgt/nvmf_main.o 00:03:23.500 TEST_HEADER include/spdk/file.h 00:03:23.500 TEST_HEADER include/spdk/ftl.h 00:03:23.500 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.500 TEST_HEADER include/spdk/hexlify.h 00:03:23.500 TEST_HEADER include/spdk/histogram_data.h 00:03:23.500 TEST_HEADER include/spdk/idxd.h 00:03:23.500 CC test/thread/poller_perf/poller_perf.o 00:03:23.500 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.500 TEST_HEADER include/spdk/init.h 00:03:23.500 CC examples/util/zipf/zipf.o 00:03:23.500 CC examples/ioat/perf/perf.o 00:03:23.500 TEST_HEADER include/spdk/ioat.h 00:03:23.500 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.500 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.500 TEST_HEADER include/spdk/json.h 00:03:23.500 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.500 TEST_HEADER include/spdk/keyring.h 00:03:23.500 TEST_HEADER include/spdk/keyring_module.h 00:03:23.500 TEST_HEADER include/spdk/likely.h 00:03:23.500 TEST_HEADER include/spdk/log.h 00:03:23.500 TEST_HEADER include/spdk/lvol.h 00:03:23.500 TEST_HEADER include/spdk/memory.h 00:03:23.500 TEST_HEADER include/spdk/mmio.h 00:03:23.500 TEST_HEADER include/spdk/nbd.h 00:03:23.500 TEST_HEADER include/spdk/notify.h 00:03:23.500 TEST_HEADER include/spdk/nvme.h 00:03:23.500 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.500 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.500 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.500 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.500 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.500 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.500 CC test/app/bdev_svc/bdev_svc.o 00:03:23.500 CC test/dma/test_dma/test_dma.o 00:03:23.500 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.500 TEST_HEADER include/spdk/nvmf.h 00:03:23.500 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.500 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.500 TEST_HEADER include/spdk/opal.h 00:03:23.500 TEST_HEADER include/spdk/opal_spec.h 00:03:23.500 TEST_HEADER include/spdk/pci_ids.h 00:03:23.500 TEST_HEADER include/spdk/pipe.h 00:03:23.500 TEST_HEADER include/spdk/queue.h 00:03:23.500 TEST_HEADER include/spdk/reduce.h 00:03:23.500 TEST_HEADER include/spdk/rpc.h 00:03:23.500 TEST_HEADER include/spdk/scheduler.h 00:03:23.500 TEST_HEADER include/spdk/scsi.h 00:03:23.500 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.500 TEST_HEADER include/spdk/sock.h 00:03:23.500 TEST_HEADER include/spdk/stdinc.h 00:03:23.500 TEST_HEADER include/spdk/string.h 00:03:23.500 TEST_HEADER include/spdk/thread.h 00:03:23.500 TEST_HEADER include/spdk/trace.h 00:03:23.500 TEST_HEADER include/spdk/trace_parser.h 00:03:23.500 TEST_HEADER include/spdk/tree.h 00:03:23.500 TEST_HEADER include/spdk/ublk.h 00:03:23.758 TEST_HEADER include/spdk/util.h 00:03:23.758 TEST_HEADER include/spdk/uuid.h 00:03:23.758 TEST_HEADER 
include/spdk/version.h 00:03:23.758 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.758 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.758 TEST_HEADER include/spdk/vhost.h 00:03:23.758 TEST_HEADER include/spdk/vmd.h 00:03:23.758 TEST_HEADER include/spdk/xor.h 00:03:23.758 TEST_HEADER include/spdk/zipf.h 00:03:23.758 CXX test/cpp_headers/accel.o 00:03:23.758 LINK poller_perf 00:03:23.758 LINK nvmf_tgt 00:03:23.758 LINK spdk_trace_record 00:03:23.758 LINK zipf 00:03:23.758 LINK ioat_perf 00:03:23.758 LINK bdev_svc 00:03:24.017 CXX test/cpp_headers/accel_module.o 00:03:24.017 CXX test/cpp_headers/assert.o 00:03:24.017 LINK spdk_trace 00:03:24.017 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.334 CC test/app/histogram_perf/histogram_perf.o 00:03:24.334 LINK test_dma 00:03:24.334 CC examples/ioat/verify/verify.o 00:03:24.334 CC test/app/jsoncat/jsoncat.o 00:03:24.334 CXX test/cpp_headers/barrier.o 00:03:24.334 LINK histogram_perf 00:03:24.608 LINK jsoncat 00:03:24.608 CC test/event/event_perf/event_perf.o 00:03:24.608 LINK verify 00:03:24.608 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.608 CXX test/cpp_headers/base64.o 00:03:24.608 LINK nvme_fuzz 00:03:24.866 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.866 CC test/event/reactor/reactor.o 00:03:24.866 CC test/event/reactor_perf/reactor_perf.o 00:03:24.866 LINK event_perf 00:03:24.866 CXX test/cpp_headers/bdev.o 00:03:24.866 CXX test/cpp_headers/bdev_module.o 00:03:25.124 LINK reactor 00:03:25.124 LINK reactor_perf 00:03:25.124 CC test/app/stub/stub.o 00:03:25.381 LINK iscsi_tgt 00:03:25.381 CC test/env/vtophys/vtophys.o 00:03:25.381 CXX test/cpp_headers/bdev_zone.o 00:03:25.381 CXX test/cpp_headers/bit_array.o 00:03:25.381 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.381 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.638 LINK stub 00:03:25.638 LINK mem_callbacks 00:03:25.638 LINK vtophys 00:03:25.638 CC test/event/app_repeat/app_repeat.o 00:03:25.638 LINK env_dpdk_post_init 00:03:25.638 CXX test/cpp_headers/bit_pool.o 00:03:25.896 CC app/spdk_lspci/spdk_lspci.o 00:03:25.896 CC app/spdk_tgt/spdk_tgt.o 00:03:25.896 LINK app_repeat 00:03:25.896 CC app/spdk_nvme_perf/perf.o 00:03:25.896 CC app/spdk_nvme_identify/identify.o 00:03:25.896 CC app/spdk_nvme_discover/discovery_aer.o 00:03:26.154 LINK spdk_lspci 00:03:26.154 CXX test/cpp_headers/blob_bdev.o 00:03:26.154 CC test/env/memory/memory_ut.o 00:03:26.412 CXX test/cpp_headers/blobfs_bdev.o 00:03:26.412 LINK spdk_tgt 00:03:26.412 LINK spdk_nvme_discover 00:03:26.412 CC test/event/scheduler/scheduler.o 00:03:26.670 CC test/rpc_client/rpc_client_test.o 00:03:26.670 CXX test/cpp_headers/blobfs.o 00:03:26.670 CXX test/cpp_headers/blob.o 00:03:26.928 CC test/env/pci/pci_ut.o 00:03:26.928 LINK rpc_client_test 00:03:26.928 LINK scheduler 00:03:27.186 CXX test/cpp_headers/conf.o 00:03:27.186 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:27.186 LINK spdk_nvme_perf 00:03:27.444 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:27.444 CXX test/cpp_headers/config.o 00:03:27.444 CXX test/cpp_headers/cpuset.o 00:03:27.702 LINK spdk_nvme_identify 00:03:27.960 CXX test/cpp_headers/crc16.o 00:03:27.960 LINK pci_ut 00:03:27.960 CC test/accel/dif/dif.o 00:03:27.960 CC test/blobfs/mkfs/mkfs.o 00:03:27.960 CC app/spdk_top/spdk_top.o 00:03:27.960 LINK vhost_fuzz 00:03:28.218 CC test/lvol/esnap/esnap.o 00:03:28.218 CXX test/cpp_headers/crc32.o 00:03:28.218 LINK memory_ut 00:03:28.476 LINK iscsi_fuzz 00:03:28.476 LINK mkfs 00:03:28.476 CXX test/cpp_headers/crc64.o 00:03:28.476 CC 
app/spdk_dd/spdk_dd.o 00:03:28.734 LINK dif 00:03:28.734 CXX test/cpp_headers/dif.o 00:03:28.734 CC app/vhost/vhost.o 00:03:28.734 CXX test/cpp_headers/dma.o 00:03:28.992 CXX test/cpp_headers/endian.o 00:03:28.992 CXX test/cpp_headers/env_dpdk.o 00:03:28.992 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.250 CXX test/cpp_headers/env.o 00:03:29.250 LINK vhost 00:03:29.508 LINK spdk_dd 00:03:29.765 CC test/nvme/aer/aer.o 00:03:29.766 LINK interrupt_tgt 00:03:29.766 CXX test/cpp_headers/event.o 00:03:29.766 CXX test/cpp_headers/fd_group.o 00:03:29.766 LINK spdk_top 00:03:29.766 CC test/nvme/reset/reset.o 00:03:30.044 CC test/bdev/bdevio/bdevio.o 00:03:30.044 CXX test/cpp_headers/fd.o 00:03:30.301 LINK aer 00:03:30.301 CC test/nvme/e2edp/nvme_dp.o 00:03:30.301 CC test/nvme/sgl/sgl.o 00:03:30.301 CC test/nvme/overhead/overhead.o 00:03:30.301 CXX test/cpp_headers/file.o 00:03:30.559 LINK reset 00:03:30.559 CC app/fio/nvme/fio_plugin.o 00:03:30.816 CXX test/cpp_headers/ftl.o 00:03:30.816 LINK sgl 00:03:30.816 LINK bdevio 00:03:30.816 LINK overhead 00:03:31.072 LINK nvme_dp 00:03:31.072 CXX test/cpp_headers/gpt_spec.o 00:03:31.072 CXX test/cpp_headers/hexlify.o 00:03:31.072 CXX test/cpp_headers/histogram_data.o 00:03:31.329 CC examples/thread/thread/thread_ex.o 00:03:31.329 CC examples/sock/hello_world/hello_sock.o 00:03:31.329 CXX test/cpp_headers/idxd.o 00:03:31.585 CC test/nvme/err_injection/err_injection.o 00:03:31.585 CXX test/cpp_headers/idxd_spec.o 00:03:31.585 LINK spdk_nvme 00:03:31.585 LINK thread 00:03:31.842 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.842 CC app/fio/bdev/fio_plugin.o 00:03:31.842 CC examples/idxd/perf/perf.o 00:03:31.842 LINK hello_sock 00:03:31.842 CXX test/cpp_headers/init.o 00:03:31.842 LINK err_injection 00:03:31.842 CC examples/vmd/led/led.o 00:03:31.842 LINK lsvmd 00:03:32.099 CXX test/cpp_headers/ioat.o 00:03:32.099 LINK led 00:03:32.356 CC test/nvme/reserve/reserve.o 00:03:32.356 CC test/nvme/simple_copy/simple_copy.o 00:03:32.356 CC test/nvme/startup/startup.o 00:03:32.356 CXX test/cpp_headers/ioat_spec.o 00:03:32.356 LINK idxd_perf 00:03:32.356 CC test/nvme/connect_stress/connect_stress.o 00:03:32.356 CXX test/cpp_headers/iscsi_spec.o 00:03:32.613 LINK startup 00:03:32.613 LINK reserve 00:03:32.613 LINK spdk_bdev 00:03:32.613 LINK simple_copy 00:03:32.613 CXX test/cpp_headers/json.o 00:03:32.613 LINK connect_stress 00:03:32.870 CC test/nvme/boot_partition/boot_partition.o 00:03:32.870 CC test/nvme/compliance/nvme_compliance.o 00:03:32.870 CXX test/cpp_headers/jsonrpc.o 00:03:33.128 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.128 CXX test/cpp_headers/keyring.o 00:03:33.128 CXX test/cpp_headers/keyring_module.o 00:03:33.128 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.128 LINK boot_partition 00:03:33.386 CC examples/nvme/hello_world/hello_world.o 00:03:33.386 CC examples/accel/perf/accel_perf.o 00:03:33.386 LINK nvme_compliance 00:03:33.386 LINK fused_ordering 00:03:33.386 LINK doorbell_aers 00:03:33.386 CXX test/cpp_headers/likely.o 00:03:33.645 CXX test/cpp_headers/log.o 00:03:33.903 LINK hello_world 00:03:33.903 CC examples/blob/hello_world/hello_blob.o 00:03:33.903 CC examples/blob/cli/blobcli.o 00:03:33.903 CC examples/nvme/reconnect/reconnect.o 00:03:33.903 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.903 CC test/nvme/fdp/fdp.o 00:03:33.903 CXX test/cpp_headers/lvol.o 00:03:33.903 CXX test/cpp_headers/memory.o 00:03:34.161 LINK accel_perf 00:03:34.161 CC examples/nvme/arbitration/arbitration.o 00:03:34.161 LINK hello_blob 
00:03:34.419 CXX test/cpp_headers/mmio.o 00:03:34.419 CC examples/nvme/hotplug/hotplug.o 00:03:34.419 CXX test/cpp_headers/nbd.o 00:03:34.419 LINK reconnect 00:03:34.678 LINK fdp 00:03:34.678 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.678 CXX test/cpp_headers/notify.o 00:03:34.678 LINK blobcli 00:03:34.678 LINK nvme_manage 00:03:34.678 LINK arbitration 00:03:34.678 CC examples/nvme/abort/abort.o 00:03:34.936 LINK hotplug 00:03:34.936 CXX test/cpp_headers/nvme.o 00:03:34.936 LINK cmb_copy 00:03:34.936 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.936 CC test/nvme/cuse/cuse.o 00:03:34.936 CXX test/cpp_headers/nvme_intel.o 00:03:34.936 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.194 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.194 LINK pmr_persistence 00:03:35.194 CXX test/cpp_headers/nvme_spec.o 00:03:35.194 CXX test/cpp_headers/nvme_zns.o 00:03:35.452 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.452 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.452 LINK abort 00:03:35.452 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.452 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.452 CXX test/cpp_headers/nvmf.o 00:03:35.723 CXX test/cpp_headers/nvmf_spec.o 00:03:35.723 CXX test/cpp_headers/nvmf_transport.o 00:03:35.723 CXX test/cpp_headers/opal.o 00:03:35.723 CXX test/cpp_headers/opal_spec.o 00:03:35.723 LINK hello_bdev 00:03:35.723 CXX test/cpp_headers/pci_ids.o 00:03:35.981 CXX test/cpp_headers/pipe.o 00:03:35.981 CXX test/cpp_headers/queue.o 00:03:35.981 CXX test/cpp_headers/reduce.o 00:03:35.981 CXX test/cpp_headers/rpc.o 00:03:35.981 CXX test/cpp_headers/scheduler.o 00:03:35.981 CXX test/cpp_headers/scsi.o 00:03:35.981 CXX test/cpp_headers/scsi_spec.o 00:03:36.242 CXX test/cpp_headers/sock.o 00:03:36.242 CXX test/cpp_headers/stdinc.o 00:03:36.242 CXX test/cpp_headers/string.o 00:03:36.242 CXX test/cpp_headers/thread.o 00:03:36.242 CXX test/cpp_headers/trace.o 00:03:36.515 CXX test/cpp_headers/trace_parser.o 00:03:36.515 CXX test/cpp_headers/tree.o 00:03:36.515 CXX test/cpp_headers/ublk.o 00:03:36.515 CXX test/cpp_headers/util.o 00:03:36.515 CXX test/cpp_headers/uuid.o 00:03:36.515 CXX test/cpp_headers/version.o 00:03:36.515 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.515 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.515 CXX test/cpp_headers/vhost.o 00:03:36.774 LINK esnap 00:03:36.774 CXX test/cpp_headers/vmd.o 00:03:36.774 CXX test/cpp_headers/xor.o 00:03:36.774 CXX test/cpp_headers/zipf.o 00:03:36.774 LINK bdevperf 00:03:37.031 LINK cuse 00:03:37.289 CC examples/nvmf/nvmf/nvmf.o 00:03:37.855 LINK nvmf 00:03:38.113 00:03:38.113 real 1m31.220s 00:03:38.113 user 10m15.837s 00:03:38.113 sys 2m3.205s 00:03:38.113 12:47:50 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:38.113 12:47:50 make -- common/autotest_common.sh@10 -- $ set +x 00:03:38.113 ************************************ 00:03:38.113 END TEST make 00:03:38.113 ************************************ 00:03:38.113 12:47:50 -- common/autotest_common.sh@1142 -- $ return 0 00:03:38.113 12:47:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:38.113 12:47:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:38.113 12:47:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:38.113 12:47:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.113 12:47:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:38.113 12:47:50 -- pm/common@44 -- $ pid=5194 00:03:38.113 12:47:50 -- pm/common@50 -- $ kill -TERM 5194 
00:03:38.113 12:47:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.113 12:47:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:38.113 12:47:50 -- pm/common@44 -- $ pid=5196 00:03:38.113 12:47:50 -- pm/common@50 -- $ kill -TERM 5196 00:03:38.371 12:47:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:38.371 12:47:50 -- nvmf/common.sh@7 -- # uname -s 00:03:38.371 12:47:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:38.371 12:47:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:38.371 12:47:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:38.371 12:47:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:38.371 12:47:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:38.371 12:47:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:38.372 12:47:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:38.372 12:47:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:38.372 12:47:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:38.372 12:47:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.372 12:47:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:03:38.372 12:47:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:03:38.372 12:47:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.372 12:47:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.372 12:47:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:38.372 12:47:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:38.372 12:47:50 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.372 12:47:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.372 12:47:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.372 12:47:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.372 12:47:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.372 12:47:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.372 12:47:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.372 12:47:50 -- paths/export.sh@5 -- # export PATH 00:03:38.372 12:47:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.372 12:47:50 -- nvmf/common.sh@51 -- # : 0 00:03:38.372 12:47:50 -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:03:38.372 12:47:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:38.372 12:47:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:38.372 12:47:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.372 12:47:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.372 12:47:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:38.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:38.372 12:47:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:38.372 12:47:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:38.372 12:47:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:38.372 12:47:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.372 12:47:50 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.372 12:47:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.372 12:47:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:38.372 12:47:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.372 12:47:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.372 12:47:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.372 12:47:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.372 12:47:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.372 12:47:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:38.372 12:47:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54826 00:03:38.372 12:47:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:38.372 12:47:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:38.372 12:47:50 -- pm/common@17 -- # local monitor 00:03:38.372 12:47:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.372 12:47:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.372 12:47:50 -- pm/common@21 -- # date +%s 00:03:38.372 12:47:50 -- pm/common@25 -- # sleep 1 00:03:38.372 12:47:50 -- pm/common@21 -- # date +%s 00:03:38.372 12:47:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721047670 00:03:38.372 12:47:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721047670 00:03:38.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721047670_collect-vmstat.pm.log 00:03:38.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721047670_collect-cpu-load.pm.log 00:03:39.304 12:47:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.304 12:47:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:39.304 12:47:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:39.304 12:47:51 -- common/autotest_common.sh@10 -- # set +x 00:03:39.304 12:47:51 -- spdk/autotest.sh@59 -- # create_test_list 00:03:39.304 12:47:51 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:39.304 12:47:51 -- common/autotest_common.sh@10 -- # set +x 00:03:39.304 12:47:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.304 12:47:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.304 12:47:51 -- spdk/autotest.sh@61 -- # 
src=/home/vagrant/spdk_repo/spdk 00:03:39.304 12:47:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.304 12:47:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.304 12:47:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:39.304 12:47:51 -- common/autotest_common.sh@1455 -- # uname 00:03:39.304 12:47:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:39.304 12:47:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:39.304 12:47:51 -- common/autotest_common.sh@1475 -- # uname 00:03:39.304 12:47:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:39.304 12:47:51 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:39.304 12:47:51 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:39.304 12:47:51 -- spdk/autotest.sh@72 -- # hash lcov 00:03:39.304 12:47:51 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:39.304 12:47:51 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:39.304 --rc lcov_branch_coverage=1 00:03:39.304 --rc lcov_function_coverage=1 00:03:39.304 --rc genhtml_branch_coverage=1 00:03:39.304 --rc genhtml_function_coverage=1 00:03:39.304 --rc genhtml_legend=1 00:03:39.304 --rc geninfo_all_blocks=1 00:03:39.304 ' 00:03:39.304 12:47:51 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:39.304 --rc lcov_branch_coverage=1 00:03:39.304 --rc lcov_function_coverage=1 00:03:39.304 --rc genhtml_branch_coverage=1 00:03:39.304 --rc genhtml_function_coverage=1 00:03:39.304 --rc genhtml_legend=1 00:03:39.304 --rc geninfo_all_blocks=1 00:03:39.304 ' 00:03:39.304 12:47:51 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:39.304 --rc lcov_branch_coverage=1 00:03:39.304 --rc lcov_function_coverage=1 00:03:39.304 --rc genhtml_branch_coverage=1 00:03:39.304 --rc genhtml_function_coverage=1 00:03:39.304 --rc genhtml_legend=1 00:03:39.304 --rc geninfo_all_blocks=1 00:03:39.304 --no-external' 00:03:39.304 12:47:51 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:39.304 --rc lcov_branch_coverage=1 00:03:39.304 --rc lcov_function_coverage=1 00:03:39.304 --rc genhtml_branch_coverage=1 00:03:39.304 --rc genhtml_function_coverage=1 00:03:39.304 --rc genhtml_legend=1 00:03:39.304 --rc geninfo_all_blocks=1 00:03:39.304 --no-external' 00:03:39.304 12:47:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:39.561 lcov: LCOV version 1.14 00:03:39.561 12:47:51 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:57.651 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.651 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 
00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:12.525 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:12.525 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:12.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:12.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:12.526 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:12.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:12.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:15.053 12:48:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:15.053 12:48:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.053 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 12:48:27 -- spdk/autotest.sh@91 -- # rm -f 00:04:15.053 12:48:27 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.619 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.619 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.619 12:48:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:15.619 12:48:28 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:15.620 12:48:28 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:15.620 12:48:28 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:15.620 12:48:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.620 12:48:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:15.620 12:48:28 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:15.620 12:48:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.620 12:48:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:15.620 12:48:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:15.620 12:48:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.620 12:48:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:15.620 12:48:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:15.620 12:48:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.620 12:48:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:15.620 12:48:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:15.620 12:48:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:15.620 12:48:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.620 12:48:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:15.620 12:48:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.620 12:48:28 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:15.620 12:48:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:15.620 12:48:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:15.620 12:48:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.620 No valid GPT data, bailing 00:04:15.620 12:48:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.878 12:48:28 -- scripts/common.sh@391 -- # pt= 00:04:15.878 12:48:28 -- scripts/common.sh@392 -- # return 1 00:04:15.878 12:48:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.878 1+0 records in 00:04:15.878 1+0 records out 00:04:15.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343394 s, 305 MB/s 00:04:15.878 12:48:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.878 12:48:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:15.878 12:48:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:15.878 12:48:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:15.878 12:48:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:15.878 No valid GPT data, bailing 00:04:15.878 12:48:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.878 12:48:28 -- scripts/common.sh@391 -- # pt= 00:04:15.878 12:48:28 -- scripts/common.sh@392 -- # return 1 00:04:15.878 12:48:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:15.878 1+0 records in 00:04:15.878 1+0 records out 00:04:15.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391075 s, 268 MB/s 00:04:15.878 12:48:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.878 12:48:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:15.878 12:48:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:15.878 12:48:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:15.878 12:48:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:15.878 No valid GPT data, bailing 00:04:15.878 12:48:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:15.878 12:48:28 -- scripts/common.sh@391 -- # pt= 00:04:15.878 12:48:28 -- scripts/common.sh@392 -- # return 1 00:04:15.878 12:48:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:15.878 1+0 records in 00:04:15.878 1+0 records out 00:04:15.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00287452 s, 365 MB/s 00:04:15.879 12:48:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.879 12:48:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:15.879 12:48:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:15.879 12:48:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:15.879 12:48:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:15.879 No valid GPT data, bailing 00:04:15.879 12:48:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:15.879 12:48:28 -- scripts/common.sh@391 -- # pt= 00:04:15.879 12:48:28 -- scripts/common.sh@392 -- # return 1 00:04:15.879 12:48:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:15.879 1+0 records in 00:04:15.879 1+0 records out 00:04:15.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00327565 s, 320 MB/s 00:04:15.879 12:48:28 -- spdk/autotest.sh@118 -- # sync 00:04:16.137 12:48:28 -- 
spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.137 12:48:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.137 12:48:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.509 12:48:29 -- spdk/autotest.sh@124 -- # uname -s 00:04:17.509 12:48:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:17.509 12:48:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:17.509 12:48:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.509 12:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.509 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:17.509 ************************************ 00:04:17.509 START TEST setup.sh 00:04:17.509 ************************************ 00:04:17.509 12:48:29 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:17.767 * Looking for test storage... 00:04:17.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.767 12:48:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:17.767 12:48:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:17.767 12:48:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:17.767 12:48:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.767 12:48:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.767 12:48:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.767 ************************************ 00:04:17.767 START TEST acl 00:04:17.767 ************************************ 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:17.767 * Looking for test storage... 
00:04:17.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:17.767 12:48:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:17.767 12:48:30 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:17.767 12:48:30 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.767 12:48:30 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.332 12:48:30 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:18.332 12:48:30 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:18.332 12:48:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.332 12:48:30 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:18.332 12:48:30 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.332 12:48:30 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.899 12:48:31 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:18.899 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.899 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.157 Hugepages 00:04:19.157 node hugesize free / total 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.157 00:04:19.157 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:19.157 12:48:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:19.158 12:48:31 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:19.158 12:48:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.158 12:48:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.158 12:48:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.158 ************************************ 00:04:19.158 START TEST denied 00:04:19.158 ************************************ 00:04:19.158 12:48:31 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:19.158 12:48:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:19.158 12:48:31 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:19.158 12:48:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:19.158 12:48:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.158 12:48:31 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.115 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.115 12:48:32 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.679 00:04:20.679 real 0m1.307s 00:04:20.679 user 0m0.525s 00:04:20.679 sys 0m0.714s 00:04:20.679 12:48:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.679 12:48:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:20.679 ************************************ 00:04:20.679 END TEST denied 00:04:20.679 ************************************ 00:04:20.679 12:48:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:20.679 12:48:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:20.679 12:48:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.679 12:48:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.679 12:48:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:20.679 ************************************ 00:04:20.679 START TEST allowed 00:04:20.679 ************************************ 00:04:20.679 12:48:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:20.679 12:48:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:20.679 12:48:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:20.679 12:48:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:20.679 12:48:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.679 12:48:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.244 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.244 12:48:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.178 00:04:22.178 real 0m1.383s 00:04:22.178 user 0m0.631s 00:04:22.178 sys 0m0.738s 00:04:22.178 12:48:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:22.178 ************************************ 00:04:22.178 END TEST allowed 00:04:22.178 ************************************ 00:04:22.178 12:48:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:22.178 12:48:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:22.178 ************************************ 00:04:22.178 END TEST acl 00:04:22.178 ************************************ 00:04:22.178 00:04:22.178 real 0m4.311s 00:04:22.178 user 0m1.941s 00:04:22.178 sys 0m2.302s 00:04:22.178 12:48:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.178 12:48:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.178 12:48:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:22.178 12:48:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:22.178 12:48:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.178 12:48:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.178 12:48:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.178 ************************************ 00:04:22.178 START TEST hugepages 00:04:22.178 ************************************ 00:04:22.178 12:48:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:22.178 * Looking for test storage... 00:04:22.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 5887460 kB' 'MemAvailable: 7397340 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 476988 kB' 'Inactive: 1351256 kB' 'Active(anon): 114788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351256 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106156 kB' 'Mapped: 48716 kB' 'Shmem: 10488 kB' 'KReclaimable: 67176 kB' 'Slab: 143112 kB' 'SReclaimable: 67176 kB' 'SUnreclaim: 75936 kB' 'KernelStack: 6300 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 325712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.178 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:22.179 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.180 12:48:34 
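The trace above is a key scan over /proc/meminfo: each line is split on ': ' into a key and a value, every key that is not Hugepagesize is skipped with continue, and the matching value (2048 kB here) is echoed back to the caller, which then records it as default_hugepages. A minimal, self-contained sketch of that pattern, assuming only stock bash and /proc/meminfo (an illustration of the scan, not the exact setup/common.sh helper):

  # Sketch of the meminfo key scan traced above: split each line on ': ', skip
  # non-matching keys, print the value of the requested key.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # same skip-and-continue as the trace
          echo "${val% kB}"                  # drop a trailing " kB" unit if present
          return 0
      done < /proc/meminfo
      return 1
  }
  default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 on this VM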
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:22.180 12:48:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:22.180 12:48:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.180 12:48:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.180 12:48:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.180 ************************************ 00:04:22.180 START TEST default_setup 00:04:22.180 ************************************ 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.180 12:48:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.010 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.010 0000:00:11.0 (1b36 
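Just before the START TEST banner, get_test_nr_hugepages turns the 2097152 kB request into a page count: 2097152 kB divided by the 2048 kB default huge page size gives 1024 pages, all assigned to node 0, after clear_hp has first zeroed every existing per-node pool and exported CLEAR_HUGE=yes. A hedged sketch of that arithmetic and the sysfs writes it implies (real kernel paths, but simplified relative to setup/hugepages.sh, and it needs root):

  # Sketch only: derive the page count the way the trace above does and clear
  # any pre-existing per-node pools before the test sets its own value.
  size_kb=2097152                       # requested hugepage memory, from the trace
  hugepagesize_kb=2048                  # default_hugepages, read from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # = 1024
  node=0                                # single-node VM in this run
  for hp in /sys/devices/system/node/node${node}/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"                    # clear_hp step: drop leftover pools
  done
  export CLEAR_HUGE=yes
  echo "default_setup will rely on ${nr_hugepages} x ${hugepagesize_kb} kB pages on node ${node}"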
0010): nvme -> uio_pci_generic 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7990548 kB' 'MemAvailable: 9500284 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 494008 kB' 'Inactive: 1351272 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142856 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 76000 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
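The three device lines above come from scripts/setup.sh: the virtio disk at 0000:00:03.0 is left alone because filesystems are mounted on it, while the two emulated NVMe controllers are moved from the kernel nvme driver to uio_pci_generic. One standard way to perform such a rebind by hand uses the generic sysfs driver_override mechanism; the sketch below is illustrative only and much simpler than what setup.sh actually does (vfio-pci support, allowlists, hugepage allocation, and so on):

  # Illustrative only: detach one NVMe controller from its kernel driver and let
  # uio_pci_generic claim it (requires root).
  bdf=0000:00:10.0                                   # example BDF taken from the log
  modprobe uio_pci_generic
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe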
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.010 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
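The verify_nr_hugepages pass above re-reads the whole of /proc/meminfo through mapfile, strips any leading "Node N " prefix so the same parser also works against /sys/devices/system/node/nodeN/meminfo, and then runs the same key scan, here yielding anon=0 for AnonHugePages. A sketch of that node-aware variant, again as an illustration rather than the exact helper:

  # Sketch: snapshot either the global or a per-node meminfo, normalise away the
  # "Node N " prefix that per-node files carry, then scan for one key.
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  get_node_meminfo_sketch() {
      local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 AnonHugePages: ..." -> "AnonHugePages: ..."
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val% kB}"; return 0; }
      done
      return 1
  }
  anon=$(get_node_meminfo_sketch AnonHugePages)    # -> 0 in the run above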
# mem=("${mem[@]#Node +([0-9]) }") 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.011 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7990548 kB' 'MemAvailable: 9500284 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493616 kB' 'Inactive: 1351272 kB' 'Active(anon): 131416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 122564 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142848 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75992 kB' 'KernelStack: 6256 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.012 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7990548 kB' 'MemAvailable: 9500284 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493420 kB' 'Inactive: 1351272 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 
48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142840 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75984 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.013 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
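[editorial note] The long run of "[[ <field> == HugePages_Rsvd ]] / continue" records around this point is the get_meminfo helper scanning /proc/meminfo key by key: every field is compared against the requested name, skipped with "continue" on a mismatch, and the matching value is echoed at the end of the scan (0 for HugePages_Rsvd in this run). A minimal sketch of that pattern, reconstructed from the trace rather than copied from SPDK's setup/common.sh (the helper name and argument handling here are hypothetical), looks like this:

    # Hypothetical helper (reconstructed from the trace, not the verbatim
    # SPDK setup/common.sh function): print one field of /proc/meminfo,
    # or of a per-node meminfo file when a node id is given.
    get_meminfo_sketch() {    # usage: get_meminfo_sketch <field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}   # per-node lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"              # e.g. "0" for HugePages_Rsvd in this run
                return 0
            fi
        done < "$mem_f"
        echo 0                           # field absent: fall back to 0
    }

On the VM in this run, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Rsvd would print 0, matching the values the trace echoes.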
00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.014 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 
12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:23.015 nr_hugepages=1024 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.015 resv_hugepages=0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.015 surplus_hugepages=0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.015 anon_hugepages=0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7990548 kB' 'MemAvailable: 9500284 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493420 kB' 'Inactive: 1351272 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142840 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75984 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.015 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 
12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.016 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7990548 kB' 'MemUsed: 4251416 kB' 'SwapCached: 0 kB' 'Active: 493668 kB' 'Inactive: 1351272 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48772 kB' 'AnonPages: 122676 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142832 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.017 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
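The field-by-field scan that ends with 'echo 0' / 'return 0' above is setup/common.sh's get_meminfo walking /proc/meminfo until it reaches the requested key (HugePages_Surp here). A minimal sketch of that helper, reconstructed only from the xtrace, so the exact function body, option handling and return conventions are assumptions rather than the literal SPDK source:

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # A per-node query switches to the node-specific file and strips the "Node N"
    # prefix, which is what common.sh@22-29 do in the trace above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # each non-matching field produces one 'continue' record in the log
        echo "$val"                       # e.g. 0 for HugePages_Surp, hence the 'echo 0' / 'return 0' above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Surp it prints 0 in this run, which is why the next hugepages.sh line adds 0 to nodes_test[node].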
00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.018 node0=1024 expecting 1024 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.018 00:04:23.018 real 0m0.950s 00:04:23.018 user 0m0.453s 00:04:23.018 sys 0m0.463s 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.018 12:48:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:23.018 ************************************ 00:04:23.018 END TEST default_setup 00:04:23.018 ************************************ 00:04:23.277 12:48:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:23.277 12:48:35 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:23.277 12:48:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.277 12:48:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.277 12:48:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.277 ************************************ 00:04:23.277 START TEST per_node_1G_alloc 00:04:23.277 ************************************ 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.277 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:23.277 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.278 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.540 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.540 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.540 12:48:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9037312 kB' 'MemAvailable: 10547056 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 494204 kB' 'Inactive: 1351280 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123128 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142824 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75968 kB' 'KernelStack: 6292 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.541 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9037312 kB' 'MemAvailable: 10547056 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493660 kB' 'Inactive: 1351280 kB' 'Active(anon): 131460 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122568 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142820 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75964 kB' 'KernelStack: 6256 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.542 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.543 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9037312 kB' 'MemAvailable: 10547056 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493396 kB' 'Inactive: 1351280 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142828 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75972 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.544 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
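For reference, the field-by-field scan traced above is the meminfo lookup helper at work: it walks every "key: value" line and skips (continue) each one until the requested key matches. A minimal sketch of what setup/common.sh's get_meminfo() appears to do, reconstructed from this trace rather than the verbatim source (the names and the per-node fallback follow the trace; treat the exact details as assumptions):

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " prefix strip below uses extglob

    # Sketch of get_meminfo <field> [node], reconstructed from the xtrace:
    # pick /proc/meminfo or the per-node sysfs meminfo, strip the "Node N "
    # prefix, then walk every "key: value" pair and echo the one that matches.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # every non-matching key is skipped
            echo "$val"                         # e.g. "0" for HugePages_Rsvd
            return 0
        done
        return 1
    }

Every "-- # continue" line in the trace corresponds to one skipped meminfo key; the scan ends at the "-- # echo 0 / -- # return 0" pair once HugePages_Rsvd is found.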
00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 
12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.545 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.546 nr_hugepages=512 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.546 resv_hugepages=0 00:04:23.546 surplus_hugepages=0 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.546 anon_hugepages=0 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9037312 kB' 'MemAvailable: 10547056 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493684 kB' 'Inactive: 1351280 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122640 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 
kB' 'Slab: 142828 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75972 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
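The accounting that this trace walks through (resv=0, the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages echoes, the "512 == nr_hugepages + surp + resv" arithmetic checks, and the "node0=512 expecting 512" line further down) follows the pattern sketched below. This is an illustrative reconstruction under the trace's naming, not the verbatim test/setup/hugepages.sh; verify_hugepages is a hypothetical wrapper name and it assumes the get_meminfo sketch above is in scope.

    # Hypothetical wrapper illustrating the verification pattern in this trace.
    verify_hugepages() {
        local expected=$1                       # 512 in this run
        local nr_hugepages resv surp node
        nr_hugepages=$(get_meminfo HugePages_Total)
        resv=$(get_meminfo HugePages_Rsvd)
        surp=$(get_meminfo HugePages_Surp)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # The pool only passes if what the kernel reports adds up to the request
        # and nothing is sitting in the reserved or surplus buckets.
        (( expected == nr_hugepages + surp + resv )) || return 1
        (( expected == nr_hugepages )) || return 1
        # Per-node follow-up: each NUMA node should report its share of the pool
        # (a single-node VM like this one owns all of it, hence "node0=512").
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting $expected"
        done
    }

Run against this VM, verify_hugepages 512 would print the same nr_hugepages=512 and "node0=512 expecting 512" lines that appear interleaved with the trace output.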
00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.546 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 
12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.547 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9037312 kB' 'MemUsed: 3204652 kB' 'SwapCached: 0 kB' 'Active: 493592 kB' 'Inactive: 1351280 kB' 'Active(anon): 131392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48716 kB' 'AnonPages: 122504 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142828 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75972 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.548 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.549 node0=512 expecting 512 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:23.549 00:04:23.549 real 0m0.476s 00:04:23.549 user 0m0.235s 00:04:23.549 sys 0m0.248s 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.549 12:48:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.549 ************************************ 00:04:23.549 END TEST per_node_1G_alloc 00:04:23.549 ************************************ 00:04:23.807 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:23.807 12:48:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:23.807 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.807 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.807 12:48:36 
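The "node0=512 expecting 512" line above is the tail of per_node_1G_alloc's verification: any surplus pages reported by get_meminfo are folded into the expected per-node count, and each node's actual allocation is compared against it (hugepages.sh@117 and @126-@130 in the trace). A condensed sketch of that check; nodes_sys and the failure handling are assumptions, since only the node-0 happy path is visible here:

    # Condensed sketch of the per-node verification printed above.
    declare -a nodes_test=(512)   # expected 2 MB pages per node for this test
    declare -a nodes_sys=(512)    # pages the system actually reports per node (assumed source)
    declare -a sorted_t=() sorted_s=()

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += 0 ))        # add the node's surplus pages (0 in the trace)
        sorted_t[nodes_test[node]]=1       # distinct expected counts, keyed by value
        sorted_s[nodes_sys[node]]=1        # distinct observed counts, keyed by value
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        # The harness runs with errexit, so a mismatch here fails the test.
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done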
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.807 ************************************ 00:04:23.807 START TEST even_2G_alloc 00:04:23.807 ************************************ 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.807 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.069 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.069 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc 
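even_2G_alloc starts by turning the requested 2097152 (kB, i.e. 2 GiB) into a hugepage count and handing it to the only NUMA node: nr_hugepages becomes 1024 and nodes_test[0]=1024, after which NRHUGE=1024 and HUGE_EVEN_ALLOC=yes drive scripts/setup.sh. A sketch of that sizing step; the 2048 kB page size is an assumption taken from the 'Hugepagesize: 2048 kB' lines in the meminfo dumps, and the real helper also honours an explicit node list that is empty in this run:

    # Sketch of the size -> page-count step (setup/hugepages.sh@49-@84 in the trace).
    get_test_nr_hugepages() {
        local size=$1                  # requested amount in kB: 2097152 kB == 2 GiB
        local default_hugepages=2048   # kB per hugepage (assumed from the meminfo dumps)
        local _no_nodes=1              # this VM exposes a single NUMA node
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
        nodes_test=()
        # With no explicit node list the whole count lands on the last (only) node,
        # mirroring nodes_test[_no_nodes - 1]=1024 in the trace.
        (( _no_nodes > 0 )) && nodes_test[_no_nodes - 1]=$nr_hugepages
    }

    get_test_nr_hugepages 2097152
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # -> nr_hugepages=1024 node0=1024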
-- setup/hugepages.sh@92 -- # local surp 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.069 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7986548 kB' 'MemAvailable: 9496292 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 494480 kB' 'Inactive: 1351280 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123396 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142772 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75916 kB' 'KernelStack: 6292 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 
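As a quick sanity check on the snapshot dumped above: its hugepage fields are internally consistent, since HugePages_Total (1024) times Hugepagesize (2048 kB) equals the Hugetlb figure, i.e. the full 2 GiB this test configures.

    # 1024 pages x 2048 kB/page should match the 'Hugetlb:' line in the dump above.
    echo "$(( 1024 * 2048 )) kB"   # -> 2097152 kB, matching 'Hugetlb: 2097152 kB'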
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.070 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7986548 kB' 'MemAvailable: 9496292 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493892 kB' 'Inactive: 
1351280 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48900 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142772 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75916 kB' 'KernelStack: 6212 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 
12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.071 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7988348 kB' 'MemAvailable: 9498092 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493688 kB' 'Inactive: 1351280 kB' 'Active(anon): 131488 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142768 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75912 kB' 'KernelStack: 6272 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.072 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.073 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.073 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.074 nr_hugepages=1024 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.074 resv_hugepages=0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.074 surplus_hugepages=0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.074 anon_hugepages=0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7988348 kB' 'MemAvailable: 9498092 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493612 kB' 'Inactive: 1351280 kB' 'Active(anon): 131412 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122520 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142768 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75912 kB' 'KernelStack: 6240 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.074 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
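The xtrace in this stretch is the setup/common.sh get_meminfo helper scanning /proc/meminfo one field at a time: the long quoted printf is the full meminfo snapshot being fed into the loop, each line is split on IFS=': ', every key is compared against the requested field (HugePages_Rsvd above, HugePages_Total here), non-matching keys hit continue, and the matching key's value is echoed back to hugepages.sh. A minimal sketch of that lookup, simplified from the traced script (the real helper slurps the file with mapfile and strips per-node prefixes with an extglob pattern; the function name and exact control flow below are illustrative only, not the verbatim source):

#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup traced above: fetch one field from
# /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo when a
# NUMA node is named (as in the "get_meminfo HugePages_Surp 0" call later
# in this log).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"  # same split the trace shows (IFS=': ')
        if [[ $var == "$get" ]]; then           # the per-key comparison repeated above
            echo "$val"                         # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1024 on the VM this log was captured on
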
00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.075 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.076 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7988612 kB' 'MemUsed: 4253352 kB' 'SwapCached: 0 kB' 'Active: 493916 kB' 'Inactive: 1351280 kB' 'Active(anon): 131716 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48976 kB' 'AnonPages: 122940 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142768 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.076 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.077 node0=1024 expecting 1024 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.077 00:04:24.077 real 0m0.471s 00:04:24.077 user 0m0.252s 00:04:24.077 sys 0m0.248s 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.077 ************************************ 00:04:24.077 END TEST even_2G_alloc 00:04:24.077 12:48:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.077 ************************************ 00:04:24.077 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.077 12:48:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:24.077 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.077 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.077 12:48:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.077 ************************************ 00:04:24.077 START TEST odd_alloc 00:04:24.077 ************************************ 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
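At this point even_2G_alloc has passed its final check (node0=1024 expecting 1024 against HugePages_Total) and odd_alloc immediately re-derives its own budget: get_test_nr_hugepages 2098176 turns HUGEMEM=2049 (MB) into nr_hugepages=1025 and assigns all of it to the single test node. A short sketch of that arithmetic, assuming the helper simply rounds the requested size up to whole 2048 kB pages (the trace records only the input and the result, not the rounding rule itself):

#!/usr/bin/env bash
# 2049 MB requested for odd_alloc, 2048 kB hugepages reported in the
# meminfo dumps in this log; round up to whole pages.
hugemem_mb=2049                                  # HUGEMEM exported by the test
hugepage_kb=2048                                 # Hugepagesize from /proc/meminfo
size_kb=$(( hugemem_mb * 1024 ))                 # 2098176, the get_test_nr_hugepages argument
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "size=${size_kb} kB -> nr_hugepages=${nr_hugepages}"   # 1025, i.e. Hugetlb 2099200 kB
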
00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.077 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.346 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.346 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7988180 kB' 'MemAvailable: 9497924 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493872 kB' 'Inactive: 1351280 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142744 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75888 kB' 'KernelStack: 6244 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.620 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 
12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 
12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
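The get_meminfo calls traced here all follow the same pattern: read the whole meminfo file with mapfile (falling back from the per-node file to /proc/meminfo when no node is given), strip any leading "Node N " prefix, then walk the "key: value" pairs with IFS=': ' read -r var val _, continuing past every key until the requested one matches and its value is echoed (0 for AnonHugePages and HugePages_Surp on this box). A minimal, self-contained re-creation of that lookup is sketched below; it mirrors what the trace shows rather than quoting setup/common.sh verbatim, so the function name and return convention are assumptions.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern visible in the trace; not the verbatim
# setup/common.sh implementation.
shopt -s extglob                       # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Per-node queries read that node's own meminfo when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    while IFS=': ' read -r var val _; do
        # Quoted right-hand side => literal match; non-matching keys are skipped.
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Surp      # prints 0 here, matching the trace's "echo 0" / "return 0"
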
00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7987928 kB' 'MemAvailable: 9497672 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493848 kB' 'Inactive: 1351280 kB' 'Active(anon): 131648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142736 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75880 kB' 'KernelStack: 6272 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 
12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 
12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7987928 kB' 'MemAvailable: 9497672 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493424 kB' 'Inactive: 1351280 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142732 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75876 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.622 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
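One reading aid for the loop above: operands such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not literal backslashes in the script. When the right-hand side of == inside [[ ]] is quoted, it matches literally instead of as a glob, and bash's xtrace re-prints such an operand with every character escaped to signal that. A short demo of the effect (variable names here are illustrative only):

#!/usr/bin/env bash
# Demo of how xtrace renders a quoted [[ == ]] operand; illustrative names only.
set -x
var=MemTotal get=HugePages_Rsvd
[[ $var == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[[ $var == $get ]]     # an unquoted operand is traced unescaped and would glob-match
set +x
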
00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.623 nr_hugepages=1025 00:04:24.623 resv_hugepages=0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.623 surplus_hugepages=0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.623 anon_hugepages=0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:24.623 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7987928 kB' 'MemAvailable: 9497672 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493696 kB' 'Inactive: 1351280 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142732 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75876 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.624 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7987928 kB' 'MemUsed: 4254036 kB' 'SwapCached: 0 kB' 'Active: 493668 kB' 'Inactive: 1351280 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48716 kB' 'AnonPages: 122632 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142732 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
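The trace above shows get_meminfo being pointed at /sys/devices/system/node/node0/meminfo once node=0 is passed in, stripping the leading "Node 0 " prefix, and then scanning key/value pairs until the requested field is found. A minimal stand-alone sketch of that pattern, reconstructed from the variable names visible in the trace rather than from the setup/common.sh source:

    # Reconstruction of the lookup pattern visible in the trace above;
    # illustrative only, not the verbatim setup/common.sh implementation.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local var val _

        # Prefer the per-node file when a node index is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Per-node files prefix every line with "Node <N> "; drop that prefix,
        # then scan "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0   # prints 0 on the system shown in the trace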
00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
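At this point every number the odd_alloc check needs has been read back: HugePages_Total=1025 and HugePages_Rsvd=0 from /proc/meminfo, and HugePages_Surp=0 from node0. The verification that follows in the next entries reduces to the arithmetic below (a paraphrase of the traced checks, not the setup/hugepages.sh source):

    # Paraphrase of the odd_alloc accounting checks the trace performs next.
    nr_hugepages=1025   # the deliberately odd request made earlier in the test
    total=1025          # HugePages_Total read from /proc/meminfo above
    resv=0              # HugePages_Rsvd read from /proc/meminfo above
    surp=0              # HugePages_Surp just read from node0's meminfo

    # Every requested page must be accounted for, with none reserved or surplus...
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    # ...and on this single-node VM the whole pool should sit on node 0.
    node0=$(( nr_hugepages + resv + surp ))
    echo "node0=${node0} expecting ${nr_hugepages}"
    (( node0 == nr_hugepages )) || echo 'unexpected per-node distribution'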
00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.625 node0=1025 expecting 1025 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:24.625 00:04:24.625 real 0m0.480s 00:04:24.625 user 0m0.255s 00:04:24.625 sys 0m0.254s 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.625 12:48:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.625 ************************************ 00:04:24.625 END TEST odd_alloc 00:04:24.625 ************************************ 00:04:24.625 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.625 12:48:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:24.625 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.625 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.625 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.625 ************************************ 00:04:24.625 START TEST custom_alloc 00:04:24.625 ************************************ 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:24.625 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.626 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.883 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.883 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9033336 kB' 'MemAvailable: 10543080 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 494164 kB' 'Inactive: 1351280 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123120 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142752 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75896 kB' 'KernelStack: 6244 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
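The "always [madvise] never != *\[\n\e\v\e\r\]*" comparison a few entries back is the transparent-hugepage gate: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled names the active THP mode, and the AnonHugePages lookup now being traced only matters when that mode is not [never]. A stand-alone sketch of that gate (illustrative; the sysfs path is the standard kernel location of the traced string):

    # Sketch of the THP gate behind the "always [madvise] never" comparison.
    thp_file=/sys/kernel/mm/transparent_hugepage/enabled

    if [[ -r $thp_file && $(<"$thp_file") != *'[never]'* ]]; then
        # THP is in "always" or "madvise" mode, so anonymous hugepages can exist;
        # read the current total the same way the trace does for other keys.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=${anon_kb:-0} kB"
    else
        echo 'anon_hugepages=0'   # THP disabled (or not built in): nothing to count
    fi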
00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.145 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.146 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241964 kB' 'MemFree: 9035788 kB' 'MemAvailable: 10545532 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493712 kB' 'Inactive: 1351280 kB' 'Active(anon): 131512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122660 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142760 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75904 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.147 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9035788 kB' 'MemAvailable: 10545532 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493452 kB' 'Inactive: 1351280 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142756 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75900 kB' 'KernelStack: 6272 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.150 nr_hugepages=512 00:04:25.150 resv_hugepages=0 00:04:25.150 surplus_hugepages=0 00:04:25.150 anon_hugepages=0 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:25.150 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9035788 kB' 'MemAvailable: 10545528 kB' 'Buffers: 2436 kB' 'Cached: 1721504 kB' 'SwapCached: 0 kB' 'Active: 494064 kB' 'Inactive: 1351276 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142760 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75904 kB' 'KernelStack: 6304 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
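
The loop continuing below is the same scan, this time for HugePages_Total, feeding the accounting check at hugepages.sh@107/@110: the kernel-reported total must equal the requested page count plus any surplus and reserved pages. Expressed with the meminfo_value sketch above (the requested count of 512 is taken from the trace; the helper name is the hypothetical one, not the script's):

requested=512                                # nr_hugepages the custom_alloc test set up
total=$(meminfo_value HugePages_Total)       # 512 in the dump above
surp=$(meminfo_value HugePages_Surp)         # 0
resv=$(meminfo_value HugePages_Rsvd)         # 0
if (( total == requested + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch: total=$total requested=$requested surp=$surp resv=$resv" >&2
fi
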
00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.151 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.152 
12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9035788 kB' 'MemUsed: 3206176 kB' 'SwapCached: 0 kB' 'Active: 493680 kB' 'Inactive: 1351280 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48716 kB' 'AnonPages: 122564 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142752 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.153 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.154 node0=512 expecting 512 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.154 00:04:25.154 real 0m0.522s 00:04:25.154 user 0m0.279s 00:04:25.154 sys 0m0.236s 00:04:25.154 ************************************ 00:04:25.154 END TEST custom_alloc 00:04:25.154 ************************************ 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.154 12:48:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- 
# set +x 00:04:25.154 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.154 12:48:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:25.154 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.154 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.154 12:48:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.154 ************************************ 00:04:25.154 START TEST no_shrink_alloc 00:04:25.154 ************************************ 00:04:25.154 12:48:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:25.154 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:25.154 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.411 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.728 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.728 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.728 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.728 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7989104 kB' 'MemAvailable: 9498848 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 494136 kB' 'Inactive: 1351280 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142784 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75928 kB' 'KernelStack: 6260 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 
12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
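
This second verify pass (no_shrink_alloc) starts from the transparent-hugepage gate visible at hugepages.sh@96: AnonHugePages is only counted when /sys/kernel/mm/transparent_hugepage/enabled is not set to [never], which is why the trace then walks /proc/meminfo for AnonHugePages and ends up with anon=0. A rough standalone equivalent, reusing the meminfo_value sketch and assuming the stock THP sysfs path:

thp=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    # e.g. "always [madvise] never" -> THP available, so count THP-backed anonymous memory (kB)
    anon=$(meminfo_value AnonHugePages)
fi
echo "anon_hugepages=$anon"
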
00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.729 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7989104 kB' 'MemAvailable: 9498848 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493736 kB' 'Inactive: 1351280 kB' 'Active(anon): 131536 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142796 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75940 kB' 'KernelStack: 6288 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7989396 kB' 'MemAvailable: 9499140 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493724 kB' 'Inactive: 1351280 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142788 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75932 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.732 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.734 nr_hugepages=1024 00:04:25.734 resv_hugepages=0 00:04:25.734 surplus_hugepages=0 00:04:25.734 anon_hugepages=0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
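By this point the trace has collected anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd); hugepages.sh@102-@105 echo the counters and @107/@109 assert that the 1024-page pool is still intact, after which get_meminfo HugePages_Total re-reads the snapshot printed below. A hedged sketch of that bookkeeping follows; check_hugepage_pool is a hypothetical name, and the awk-based getter is a stand-in for the field-by-field scan traced above, not the script's own helper:

#!/usr/bin/env bash
# Sketch of the no_shrink_alloc bookkeeping traced at hugepages.sh@97-@110.
# check_hugepage_pool is a hypothetical name; meminfo_field is an awk
# stand-in for the scan shown in the trace.
meminfo_field() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

check_hugepage_pool() {
    local expected=$1                       # 1024 pages requested in this run
    local anon surp resv total
    anon=$(meminfo_field AnonHugePages)     # transparent hugepage usage, kB
    surp=$(meminfo_field HugePages_Surp)    # surplus pages
    resv=$(meminfo_field HugePages_Rsvd)    # reserved pages
    total=$(meminfo_field HugePages_Total)  # pages currently in the pool

    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool must not have shrunk: the requested count is still present,
    # with no surplus or reserved pages hiding a shortfall.
    (( expected == total + surp + resv )) || return 1
    (( expected == total ))
}

check_hugepage_pool 1024 || echo 'hugepage pool shrank' >&2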
00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7989396 kB' 'MemAvailable: 9499140 kB' 'Buffers: 2436 kB' 'Cached: 1721508 kB' 'SwapCached: 0 kB' 'Active: 493476 kB' 'Inactive: 1351280 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122420 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142788 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75932 kB' 'KernelStack: 6272 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
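The /proc/meminfo snapshot printed a few entries above is internally consistent with the configured pool: HugePages_Total of 1024 pages at a Hugepagesize of 2048 kB accounts exactly for the reported Hugetlb of 2097152 kB. A quick sanity check of that arithmetic, with the values hard-coded from this log:

#!/usr/bin/env bash
# Values copied from the meminfo snapshot in the trace above.
hugepages_total=1024      # HugePages_Total
hugepagesize_kb=2048      # Hugepagesize, kB
hugetlb_kb=2097152        # Hugetlb, kB
(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo 'Hugetlb matches HugePages_Total x Hugepagesize'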
00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.734 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7989656 kB' 'MemUsed: 4252308 kB' 'SwapCached: 0 kB' 'Active: 493808 kB' 'Inactive: 1351280 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351280 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1723944 kB' 'Mapped: 48716 kB' 'AnonPages: 122772 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142780 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 
12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.737 node0=1024 expecting 1024 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.737 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.738 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.738 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:25.738 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.738 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.003 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.003 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.003 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:26.003 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7984804 kB' 'MemAvailable: 9494552 kB' 'Buffers: 2436 kB' 'Cached: 1721512 kB' 'SwapCached: 0 kB' 'Active: 494332 kB' 'Inactive: 1351284 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142784 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75928 kB' 'KernelStack: 6260 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
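[editor's note] At this point the per-node count has matched ("node0=1024 expecting 1024"), and the test re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no; setup.sh logs "Requested 512 hugepages but 1024 already allocated on node0" instead of shrinking the pool, and verify_nr_hugepages starts over: it checks that transparent hugepages are not forced to [never] (the "always [madvise] never" test above), pulls AnonHugePages, then scans for HugePages_Surp. A condensed sketch of that re-check, reusing the get_meminfo sketch from earlier in this log; the body below is a simplification of the traced hugepages.sh logic, not its verbatim source:

    #!/usr/bin/env bash
    # Sketch of the no_shrink_alloc re-verification: after setup.sh runs with
    # NRHUGE=512 CLEAR_HUGE=no, the pool must still hold its original 1024 pages.
    verify_nr_hugepages() {
        local nr_hugepages=1024 anon=0 surp resv

        # Anonymous THP only counts when THP is not set to [never].
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi

        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)

        # Global pool must still add up to the expected 1024 pages.
        (( 1024 == nr_hugepages + surp + resv )) || return 1

        # Per-node view must agree as well, e.g. "node0=1024 expecting 1024".
        local node0_total
        node0_total=$(get_meminfo HugePages_Total 0)
        echo "node0=$node0_total expecting $nr_hugepages"
        [[ $node0_total == "$nr_hugepages" ]]
    }

The remaining trace below is this second pass scanning /proc/meminfo for AnonHugePages and then HugePages_Surp, key by key, exactly as in the first pass.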
00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.003 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7984804 kB' 'MemAvailable: 9494552 kB' 'Buffers: 2436 kB' 'Cached: 1721512 kB' 'SwapCached: 0 kB' 'Active: 493764 kB' 'Inactive: 1351284 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122672 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142776 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75920 kB' 'KernelStack: 6256 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.004 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 
12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.005 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.269 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7985172 kB' 'MemAvailable: 9494920 kB' 'Buffers: 2436 kB' 'Cached: 1721512 kB' 'SwapCached: 0 kB' 'Active: 494080 kB' 'Inactive: 1351284 kB' 'Active(anon): 131880 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122600 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142772 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75916 kB' 'KernelStack: 6304 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.270 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.271 nr_hugepages=1024 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.271 resv_hugepages=0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.271 surplus_hugepages=0 00:04:26.271 anon_hugepages=0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.271 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7985172 kB' 'MemAvailable: 9494920 kB' 'Buffers: 2436 kB' 'Cached: 1721512 kB' 'SwapCached: 0 kB' 'Active: 494256 kB' 'Inactive: 1351284 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66856 kB' 'Slab: 142760 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75904 kB' 'KernelStack: 6288 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 342712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
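For reference, the xtrace entries above come from the setup/common.sh get_meminfo helper scanning /proc/meminfo one "key: value" line at a time: every non-matching field logs an IFS=': ' / read -r var val _ / continue triple, and the matching field ends the call with an echo of its value and return 0. The following is a minimal sketch of that loop reconstructed from the trace only; it is an approximation, not the verbatim SPDK source, and omits the per-node "Node N" prefix handling that the real helper performs.

    # Sketch (assumption: simplified reconstruction from the trace above,
    # not the actual SPDK setup/common.sh implementation).
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, per-node counters come from sysfs instead
        # (the trace shows the empty-node path falling back to /proc/meminfo).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem line
        mapfile -t mem < "$mem_f"
        for line in "${mem[@]}"; do
            # "HugePages_Surp:        0" -> var=HugePages_Surp val=0
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. surp=$(get_meminfo HugePages_Surp)   # yields 0 in the run above

The surrounding hugepages.sh checks visible earlier in the trace (common.sh@107/@109) then compare the expected 1024 pages against nr_hugepages plus the surplus and reserved counts obtained this way, before re-reading HugePages_Total below.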
00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.272 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
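The field-by-field scan running through here is setup/common.sh looking up a single meminfo counter (HugePages_Total system-wide, then HugePages_Surp restricted to node0). A minimal standalone sketch of that lookup, assuming an illustrative function name rather than the actual get_meminfo helper:

#!/usr/bin/env bash
# Look up one meminfo counter, system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a f
    # Per-node counters live in sysfs and carry a "Node <id>" prefix on every line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r -a f; do
        [[ ${f[0]} == Node ]] && f=("${f[@]:2}")   # drop the "Node <id>" prefix
        if [[ ${f[0]} == "$get" ]]; then
            echo "${f[1]}"                         # value only (kB or page count)
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total      # 1024 on the VM in this log
get_meminfo_sketch HugePages_Surp 0     # same counter, restricted to node0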
00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.273 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7985172 kB' 'MemUsed: 4256792 kB' 'SwapCached: 0 kB' 'Active: 
493716 kB' 'Inactive: 1351284 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351284 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1723948 kB' 'Mapped: 48720 kB' 'AnonPages: 122604 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66856 kB' 'Slab: 142748 kB' 'SReclaimable: 66856 kB' 'SUnreclaim: 75892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 
12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.274 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.275 node0=1024 expecting 1024 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.275 00:04:26.275 real 0m0.961s 00:04:26.275 user 0m0.501s 00:04:26.275 sys 0m0.468s 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.275 ************************************ 00:04:26.275 END TEST no_shrink_alloc 00:04:26.275 ************************************ 00:04:26.275 12:48:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.275 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.275 
12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.275 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.275 00:04:26.275 real 0m4.235s 00:04:26.275 user 0m2.119s 00:04:26.275 sys 0m2.133s 00:04:26.275 ************************************ 00:04:26.275 END TEST hugepages 00:04:26.275 ************************************ 00:04:26.275 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.275 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.275 12:48:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:26.275 12:48:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.275 12:48:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.275 12:48:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.275 12:48:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.275 ************************************ 00:04:26.275 START TEST driver 00:04:26.275 ************************************ 00:04:26.275 12:48:38 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.275 * Looking for test storage... 00:04:26.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.275 12:48:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:26.275 12:48:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.275 12:48:38 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.840 12:48:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.840 12:48:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.840 12:48:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.840 12:48:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:26.840 ************************************ 00:04:26.840 START TEST guess_driver 00:04:26.840 ************************************ 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
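The pick_driver logic tracing through here reduces to: use vfio-pci when the IOMMU is usable, otherwise fall back to uio_pci_generic. A condensed sketch of that decision, with an illustrative function name (the real logic lives in /home/vagrant/spdk_repo/spdk/test/setup/driver.sh):

#!/usr/bin/env bash
shopt -s nullglob   # so an empty /sys/kernel/iommu_groups expands to zero elements

pick_driver_sketch() {
    local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    local groups=(/sys/kernel/iommu_groups/*)

    # Prefer vfio-pci when IOMMU groups exist or unsafe no-IOMMU mode is enabled.
    if (( ${#groups[@]} > 0 )) || { [[ -e $unsafe ]] && [[ $(<"$unsafe") == Y ]]; }; then
        echo vfio-pci
        return 0
    fi
    # Otherwise fall back to uio_pci_generic, if modprobe can resolve it to a module.
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

pick_driver_sketch   # prints uio_pci_generic on the no-IOMMU test VM in this log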
00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:26.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:26.840 Looking for driver=uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.840 12:48:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.405 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:27.405 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:27.405 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.664 12:48:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.664 12:48:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:27.664 12:48:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:27.664 12:48:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.664 12:48:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.316 00:04:28.316 real 0m1.322s 00:04:28.316 user 0m0.480s 00:04:28.316 sys 0m0.843s 00:04:28.316 12:48:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:28.316 12:48:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.316 ************************************ 00:04:28.316 END TEST guess_driver 00:04:28.316 ************************************ 00:04:28.316 12:48:40 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:28.316 00:04:28.316 real 0m1.944s 00:04:28.316 user 0m0.680s 00:04:28.316 sys 0m1.314s 00:04:28.316 ************************************ 00:04:28.316 END TEST driver 00:04:28.316 ************************************ 00:04:28.316 12:48:40 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.316 12:48:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.316 12:48:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:28.316 12:48:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:28.316 12:48:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.316 12:48:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.316 12:48:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:28.316 ************************************ 00:04:28.316 START TEST devices 00:04:28.316 ************************************ 00:04:28.316 12:48:40 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:28.316 * Looking for test storage... 00:04:28.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.316 12:48:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:28.316 12:48:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:28.316 12:48:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.316 12:48:40 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
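The is_block_zoned checks running through here walk /sys/block and flag any NVMe namespace whose queue reports a zoned model other than "none", so later steps skip those devices. A standalone sketch of that scan, using illustrative names rather than the actual autotest_common.sh helper:

#!/usr/bin/env bash
# Record NVMe block devices that are zoned, so regular filesystems avoid them.
declare -A zoned_devs=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue            # attribute missing on very old kernels
    if [[ $(<"$dev/queue/zoned") != none ]]; then    # "none" means a conventional device
        zoned_devs[${dev##*/}]=1
    fi
done
echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"   # none on this VM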
00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:28.912 12:48:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:28.912 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:28.912 12:48:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:28.912 12:48:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:29.171 No valid GPT data, bailing 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:29.171 
12:48:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:29.171 No valid GPT data, bailing 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:29.171 No valid GPT data, bailing 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:29.171 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:29.171 12:48:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:29.171 12:48:41 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:29.429 No valid GPT data, bailing 00:04:29.429 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:29.429 12:48:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:29.429 12:48:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:29.429 12:48:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:29.429 12:48:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:29.429 12:48:41 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:29.429 12:48:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:29.429 12:48:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.429 12:48:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.429 12:48:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.429 ************************************ 00:04:29.429 START TEST nvme_mount 00:04:29.429 ************************************ 00:04:29.429 12:48:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:29.429 12:48:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:29.429 12:48:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:29.429 12:48:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.429 12:48:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.430 12:48:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.364 Creating new GPT entries in memory. 00:04:30.364 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.364 other utilities. 00:04:30.364 12:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.364 12:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.364 12:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.364 12:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.364 12:48:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:31.297 Creating new GPT entries in memory. 00:04:31.297 The operation has completed successfully. 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59041 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.297 12:48:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.554 12:48:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.812 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.812 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.071 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.071 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.071 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.071 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.071 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.329 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.588 12:48:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.588 12:48:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:32.846 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.104 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.104 00:04:33.104 real 0m3.810s 00:04:33.104 user 0m0.640s 00:04:33.104 sys 0m0.915s 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.104 12:48:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.104 ************************************ 00:04:33.104 END TEST nvme_mount 00:04:33.104 ************************************ 00:04:33.104 12:48:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:33.104 12:48:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:33.104 12:48:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.104 12:48:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.104 12:48:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.104 ************************************ 00:04:33.104 START TEST dm_mount 00:04:33.104 ************************************ 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
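Before the dm_mount partitioning continues below, it may help to see the whole-disk nvme_mount pass that finishes above (devices.sh@110-128 in the trace) in condensed form. The commands are taken from the xtrace; the $mnt shorthand and the ': >' creation of the marker file are reconstructions for readability, not the literal helper code:

  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF /dev/nvme0n1 1024M        # mkfs() formats the bare namespace (common.sh@71)
  mount /dev/nvme0n1 "$mnt"               # common.sh@72
  : > "$mnt/test_nvme"                    # create the test_nvme marker that verify() checks (devices.sh@73)
  rm "$mnt/test_nvme"                     # removed again once verified (devices.sh@74)
  umount "$mnt"                           # devices.sh@123
  wipefs --all /dev/nvme0n1               # cleanup_nvme wipes the ext4/GPT signatures again

The wipefs line "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" is the ext4 superblock magic being cleared, which is what lets the next test start from a clean disk.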
00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.104 12:48:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.477 Creating new GPT entries in memory. 00:04:34.477 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.477 other utilities. 00:04:34.477 12:48:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.477 12:48:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.477 12:48:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.477 12:48:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.477 12:48:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:35.453 Creating new GPT entries in memory. 00:04:35.453 The operation has completed successfully. 00:04:35.453 12:48:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.453 12:48:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.453 12:48:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.453 12:48:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.453 12:48:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:36.439 The operation has completed successfully. 
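The partition_drive step traced above boils down to zapping the disk and carving two equal GPT partitions with sgdisk. Condensed (the sector ranges are copied from the trace; running sync_dev_uevents.sh in the background is inferred from the 'wait 59468' that follows, so treat the '&'/'wait' plumbing as an approximation):

  sgdisk /dev/nvme0n1 --zap-all                                      # destroy any existing GPT/MBR
  scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 &  # watch for the new partition uevents
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191         # create nvme0n1p1
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335       # create nvme0n1p2
  wait $!                                                            # block until udev has seen both partitions

sgdisk reports "The operation has completed successfully." after each --new call, which is why that message appears twice above.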
00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59468 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:36.439 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.440 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.698 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.698 12:48:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.698 12:48:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.956 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:37.215 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:37.215 00:04:37.215 real 0m4.050s 00:04:37.215 user 0m0.418s 00:04:37.215 sys 0m0.602s 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.215 ************************************ 00:04:37.215 12:48:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.215 END TEST dm_mount 00:04:37.215 ************************************ 00:04:37.215 12:48:49 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:37.215 12:48:49 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.215 12:48:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.472 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.472 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:37.473 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:37.473 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.473 12:48:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:37.473 ************************************ 00:04:37.473 END TEST devices 00:04:37.473 ************************************ 00:04:37.473 00:04:37.473 real 0m9.260s 00:04:37.473 user 0m1.675s 00:04:37.473 sys 0m2.024s 00:04:37.473 12:48:49 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.473 12:48:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.473 12:48:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.730 ************************************ 00:04:37.730 END TEST setup.sh 00:04:37.730 ************************************ 00:04:37.730 00:04:37.730 real 0m19.996s 00:04:37.730 user 0m6.493s 00:04:37.730 sys 0m7.937s 00:04:37.730 12:48:49 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.730 12:48:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.730 12:48:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.730 12:48:49 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.293 Hugepages 00:04:38.293 node hugesize free / total 00:04:38.293 node0 1048576kB 0 / 0 00:04:38.293 node0 2048kB 2048 / 2048 00:04:38.293 00:04:38.293 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.293 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:38.293 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:38.551 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:38.551 12:48:50 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.551 12:48:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.551 12:48:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.551 12:48:50 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.117 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.117 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.375 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.375 12:48:51 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:40.336 12:48:52 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:40.336 12:48:52 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:40.336 12:48:52 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:40.336 12:48:52 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:40.336 12:48:52 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:40.336 12:48:52 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:40.336 12:48:52 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.336 12:48:52 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.336 12:48:52 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:40.336 12:48:52 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:40.336 12:48:52 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:40.336 12:48:52 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.593 Waiting for block devices as requested 00:04:40.593 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.850 12:48:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:40.850 12:48:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:40.850 12:48:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:40.850 12:48:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:40.850 12:48:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:40.850 12:48:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:40.850 12:48:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:40.850 12:48:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:40.850 12:48:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:40.850 12:48:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:40.850 12:48:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:40.850 12:48:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:40.850 12:48:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:40.850 12:48:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:40.850 12:48:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:40.850 12:48:53 -- common/autotest_common.sh@1557 -- # continue 00:04:40.851 
12:48:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:40.851 12:48:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.851 12:48:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:40.851 12:48:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:40.851 12:48:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:40.851 12:48:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:40.851 12:48:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:40.851 12:48:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:40.851 12:48:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:40.851 12:48:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:40.851 12:48:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:40.851 12:48:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:40.851 12:48:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:40.851 12:48:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:40.851 12:48:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:40.851 12:48:53 -- common/autotest_common.sh@1557 -- # continue 00:04:40.851 12:48:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:40.851 12:48:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.851 12:48:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 12:48:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:41.108 12:48:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.108 12:48:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 12:48:53 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.672 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.672 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.929 12:48:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:41.929 12:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.929 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.929 12:48:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:41.929 12:48:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:41.929 12:48:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.929 12:48:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:41.929 12:48:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:41.929 12:48:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:41.929 12:48:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:41.929 12:48:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:41.929 12:48:54 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.929 12:48:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.929 12:48:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:41.929 12:48:54 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:41.929 12:48:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:41.929 12:48:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.929 12:48:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:41.929 12:48:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.929 12:48:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.929 12:48:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.929 12:48:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:41.929 12:48:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.929 12:48:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.929 12:48:54 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:41.929 12:48:54 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:41.929 12:48:54 -- common/autotest_common.sh@1593 -- # return 0 00:04:41.929 12:48:54 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:41.929 12:48:54 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:41.929 12:48:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.929 12:48:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.929 12:48:54 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:41.929 12:48:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.929 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.929 12:48:54 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:41.929 12:48:54 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.929 12:48:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.929 12:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.929 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.929 ************************************ 00:04:41.929 START TEST env 00:04:41.929 ************************************ 00:04:41.929 12:48:54 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.929 * Looking for test storage... 
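Both the pre_cleanup block-device wait above and the opal_revert_cleanup pass just before the env tests lean on the same helpers. Stripped of xtrace noise, the logic is roughly the following; the loop structure and variable names are reconstructed from the trace rather than copied from autotest_common.sh:

  bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))   # -> 0000:00:10.0 0000:00:11.0 here
  for bdf in "${bdfs[@]}"; do
      # resolve the PCI address to its kernel controller node (nvme1 for 10.0, nvme0 for 11.0 above)
      ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # 0x12a on these controllers
      (( oacs & 0x8 )) || continue                                   # OACS bit 3 = namespace management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && continue                                 # nothing unallocated, so no revert needed
  done

opal_revert_cleanup walks the same bdf list but additionally requires the PCI device id to be 0x0a54; the emulated controllers here report 0x0010 (1b36:0010), so it returns without reverting anything.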
00:04:41.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.929 12:48:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.929 12:48:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.929 12:48:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.929 12:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.929 ************************************ 00:04:41.929 START TEST env_memory 00:04:41.929 ************************************ 00:04:41.929 12:48:54 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.929 00:04:41.929 00:04:41.929 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.929 http://cunit.sourceforge.net/ 00:04:41.929 00:04:41.929 00:04:41.929 Suite: memory 00:04:42.186 Test: alloc and free memory map ...[2024-07-15 12:48:54.414568] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.186 passed 00:04:42.186 Test: mem map translation ...[2024-07-15 12:48:54.447810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.186 [2024-07-15 12:48:54.448519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.186 [2024-07-15 12:48:54.449004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.186 [2024-07-15 12:48:54.449044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.186 passed 00:04:42.186 Test: mem map registration ...[2024-07-15 12:48:54.508079] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.186 [2024-07-15 12:48:54.508175] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.186 passed 00:04:42.186 Test: mem map adjacent registrations ...passed 00:04:42.186 00:04:42.186 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.186 suites 1 1 n/a 0 0 00:04:42.186 tests 4 4 4 0 0 00:04:42.186 asserts 152 152 152 0 n/a 00:04:42.186 00:04:42.186 Elapsed time = 0.201 seconds 00:04:42.186 ************************************ 00:04:42.186 END TEST env_memory 00:04:42.186 00:04:42.186 real 0m0.222s 00:04:42.186 user 0m0.203s 00:04:42.186 sys 0m0.012s 00:04:42.186 12:48:54 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.186 12:48:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.186 ************************************ 00:04:42.186 12:48:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:42.186 12:48:54 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.186 12:48:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.186 12:48:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.186 12:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.186 ************************************ 00:04:42.186 START TEST env_vtophys 
00:04:42.186 ************************************ 00:04:42.186 12:48:54 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.186 EAL: lib.eal log level changed from notice to debug 00:04:42.186 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.186 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.444 EAL: Maximum logical cores by configuration: 128 00:04:42.445 EAL: Detected CPU lcores: 10 00:04:42.445 EAL: Detected NUMA nodes: 1 00:04:42.445 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.445 EAL: Detected shared linkage of DPDK 00:04:42.445 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.445 EAL: Selected IOVA mode 'PA' 00:04:42.445 EAL: Probing VFIO support... 00:04:42.445 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.445 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.445 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.445 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.445 EAL: Setting up physically contiguous memory... 00:04:42.445 EAL: Setting maximum number of open files to 524288 00:04:42.445 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.445 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.445 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.445 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.445 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.445 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.445 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.445 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.445 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.445 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.445 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.445 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.445 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.445 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.445 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.445 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.445 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.445 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.445 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.445 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.445 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.445 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.445 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.445 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.445 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.445 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.445 EAL: Hugepages will be freed exactly as allocated. 
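A quick sanity check on the reservation sizes above: each memseg list is created with n_segs:8192 at hugepage_sz:2097152, and 8192 x 2 MiB = 16 GiB = 0x400000000 bytes, which matches the size of each of the four virtual areas reserved at 0x200000200000, 0x200400400000, 0x200800600000 and 0x200c00800000. That is 64 GiB of address space reserved up front for socket 0, but it is only backed by hugepages as the later malloc tests actually expand the heap.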
00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: TSC frequency is ~2200000 KHz 00:04:42.445 EAL: Main lcore 0 is ready (tid=7f738381ba00;cpuset=[0]) 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 0 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.445 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.445 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.445 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.445 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:42.445 00:04:42.445 00:04:42.445 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.445 http://cunit.sourceforge.net/ 00:04:42.445 00:04:42.445 00:04:42.445 Suite: components_suite 00:04:42.445 Test: vtophys_malloc_test ...passed 00:04:42.445 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.445 EAL: Trying to obtain current memory policy. 
00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.445 EAL: Restoring previous memory policy: 4 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.445 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.445 EAL: request: mp_malloc_sync 00:04:42.445 EAL: No shared files mode enabled, IPC is disabled 00:04:42.445 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.445 EAL: Trying to obtain current memory policy. 00:04:42.445 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.702 EAL: Restoring previous memory policy: 4 00:04:42.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.702 EAL: request: mp_malloc_sync 00:04:42.702 EAL: No shared files mode enabled, IPC is disabled 00:04:42.702 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.702 EAL: request: mp_malloc_sync 00:04:42.702 EAL: No shared files mode enabled, IPC is disabled 00:04:42.702 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.702 EAL: Trying to obtain current memory policy. 
00:04:42.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.702 EAL: Restoring previous memory policy: 4 00:04:42.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.702 EAL: request: mp_malloc_sync 00:04:42.702 EAL: No shared files mode enabled, IPC is disabled 00:04:42.702 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.702 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.960 EAL: request: mp_malloc_sync 00:04:42.960 EAL: No shared files mode enabled, IPC is disabled 00:04:42.960 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.960 EAL: Trying to obtain current memory policy. 00:04:42.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.960 EAL: Restoring previous memory policy: 4 00:04:42.960 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.960 EAL: request: mp_malloc_sync 00:04:42.960 EAL: No shared files mode enabled, IPC is disabled 00:04:42.960 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.217 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.218 passed 00:04:43.218 00:04:43.218 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.218 suites 1 1 n/a 0 0 00:04:43.218 tests 2 2 2 0 0 00:04:43.218 asserts 5386 5386 5386 0 n/a 00:04:43.218 00:04:43.218 Elapsed time = 0.722 seconds 00:04:43.218 EAL: request: mp_malloc_sync 00:04:43.218 EAL: No shared files mode enabled, IPC is disabled 00:04:43.218 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.218 EAL: request: mp_malloc_sync 00:04:43.218 EAL: No shared files mode enabled, IPC is disabled 00:04:43.218 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.218 EAL: No shared files mode enabled, IPC is disabled 00:04:43.218 EAL: No shared files mode enabled, IPC is disabled 00:04:43.218 EAL: No shared files mode enabled, IPC is disabled 00:04:43.218 ************************************ 00:04:43.218 END TEST env_vtophys 00:04:43.218 ************************************ 00:04:43.218 00:04:43.218 real 0m0.926s 00:04:43.218 user 0m0.463s 00:04:43.218 sys 0m0.328s 00:04:43.218 12:48:55 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.218 12:48:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:43.218 12:48:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.218 12:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 START TEST env_pci 00:04:43.218 ************************************ 00:04:43.218 12:48:55 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.218 00:04:43.218 00:04:43.218 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.218 http://cunit.sourceforge.net/ 00:04:43.218 00:04:43.218 00:04:43.218 Suite: pci 00:04:43.218 Test: pci_hook ...[2024-07-15 12:48:55.603108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60645 has claimed it 00:04:43.218 passed 00:04:43.218 00:04:43.218 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.218 suites 1 1 n/a 0 0 00:04:43.218 tests 1 1 1 0 0 00:04:43.218 asserts 25 25 25 0 n/a 00:04:43.218 
00:04:43.218 Elapsed time = 0.002 seconds 00:04:43.218 EAL: Cannot find device (10000:00:01.0) 00:04:43.218 EAL: Failed to attach device on primary process 00:04:43.218 ************************************ 00:04:43.218 END TEST env_pci 00:04:43.218 ************************************ 00:04:43.218 00:04:43.218 real 0m0.020s 00:04:43.218 user 0m0.008s 00:04:43.218 sys 0m0.012s 00:04:43.218 12:48:55 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.218 12:48:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:43.218 12:48:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.218 12:48:55 env -- env/env.sh@15 -- # uname 00:04:43.218 12:48:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.218 12:48:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.218 12:48:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:43.218 12:48:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.218 12:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 START TEST env_dpdk_post_init 00:04:43.218 ************************************ 00:04:43.218 12:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.476 EAL: Detected CPU lcores: 10 00:04:43.476 EAL: Detected NUMA nodes: 1 00:04:43.476 EAL: Detected shared linkage of DPDK 00:04:43.476 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.476 EAL: Selected IOVA mode 'PA' 00:04:43.476 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:43.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:43.476 Starting DPDK initialization... 00:04:43.476 Starting SPDK post initialization... 00:04:43.476 SPDK NVMe probe 00:04:43.476 Attaching to 0000:00:10.0 00:04:43.476 Attaching to 0000:00:11.0 00:04:43.476 Attached to 0000:00:10.0 00:04:43.476 Attached to 0000:00:11.0 00:04:43.476 Cleaning up... 
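For reference, the env_dpdk_post_init invocation whose output ends above is assembled by env.sh as traced at env.sh@14-24; in shorthand (the if/uname structure is a reconstruction of what the trace implies):

  argv='-c 0x1 '                                   # single-core mask
  if [ "$(uname)" = Linux ]; then
      argv+=--base-virtaddr=0x200000000000         # pin DPDK's base virtual address on Linux
  fi
  run_test env_dpdk_post_init \
      test/env/env_dpdk_post_init/env_dpdk_post_init $argv   # $argv intentionally unquoted so it word-splits

That is why the same two NVMe controllers probed by setup.sh earlier show up again here: the test attaches to 0000:00:10.0 and 0000:00:11.0 through SPDK's PCI layer and then releases them during "Cleaning up...".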
00:04:43.476 00:04:43.476 real 0m0.185s 00:04:43.476 user 0m0.045s 00:04:43.476 sys 0m0.039s 00:04:43.476 12:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.476 12:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.476 ************************************ 00:04:43.476 END TEST env_dpdk_post_init 00:04:43.476 ************************************ 00:04:43.476 12:48:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:43.476 12:48:55 env -- env/env.sh@26 -- # uname 00:04:43.476 12:48:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.476 12:48:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.476 12:48:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.476 12:48:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.476 12:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.476 ************************************ 00:04:43.476 START TEST env_mem_callbacks 00:04:43.476 ************************************ 00:04:43.476 12:48:55 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.476 EAL: Detected CPU lcores: 10 00:04:43.476 EAL: Detected NUMA nodes: 1 00:04:43.477 EAL: Detected shared linkage of DPDK 00:04:43.477 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.477 EAL: Selected IOVA mode 'PA' 00:04:43.735 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.735 00:04:43.735 00:04:43.735 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.735 http://cunit.sourceforge.net/ 00:04:43.735 00:04:43.735 00:04:43.735 Suite: memory 00:04:43.735 Test: test ... 
00:04:43.735 register 0x200000200000 2097152 00:04:43.735 malloc 3145728 00:04:43.735 register 0x200000400000 4194304 00:04:43.735 buf 0x200000500000 len 3145728 PASSED 00:04:43.735 malloc 64 00:04:43.735 buf 0x2000004fff40 len 64 PASSED 00:04:43.735 malloc 4194304 00:04:43.735 register 0x200000800000 6291456 00:04:43.735 buf 0x200000a00000 len 4194304 PASSED 00:04:43.735 free 0x200000500000 3145728 00:04:43.735 free 0x2000004fff40 64 00:04:43.735 unregister 0x200000400000 4194304 PASSED 00:04:43.735 free 0x200000a00000 4194304 00:04:43.735 unregister 0x200000800000 6291456 PASSED 00:04:43.735 malloc 8388608 00:04:43.735 register 0x200000400000 10485760 00:04:43.735 buf 0x200000600000 len 8388608 PASSED 00:04:43.735 free 0x200000600000 8388608 00:04:43.735 unregister 0x200000400000 10485760 PASSED 00:04:43.735 passed 00:04:43.735 00:04:43.735 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.735 suites 1 1 n/a 0 0 00:04:43.735 tests 1 1 1 0 0 00:04:43.735 asserts 15 15 15 0 n/a 00:04:43.735 00:04:43.735 Elapsed time = 0.008 seconds 00:04:43.735 00:04:43.735 real 0m0.146s 00:04:43.735 user 0m0.020s 00:04:43.735 sys 0m0.022s 00:04:43.735 12:48:56 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.735 12:48:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:43.735 ************************************ 00:04:43.735 END TEST env_mem_callbacks 00:04:43.735 ************************************ 00:04:43.735 12:48:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:43.735 00:04:43.735 real 0m1.787s 00:04:43.735 user 0m0.847s 00:04:43.735 sys 0m0.584s 00:04:43.735 12:48:56 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.735 12:48:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.735 ************************************ 00:04:43.735 END TEST env 00:04:43.735 ************************************ 00:04:43.735 12:48:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.735 12:48:56 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.735 12:48:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.735 12:48:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.735 12:48:56 -- common/autotest_common.sh@10 -- # set +x 00:04:43.735 ************************************ 00:04:43.735 START TEST rpc 00:04:43.735 ************************************ 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.735 * Looking for test storage... 00:04:43.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.735 12:48:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60760 00:04:43.735 12:48:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:43.735 12:48:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.735 12:48:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60760 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@829 -- # '[' -z 60760 ']' 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
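At this point the suite moves from the env tests (closed above at 1.787s wall time; the register/unregister lines were the memory callbacks firing as DPDK mapped and unmapped regions for the test's mallocs) to the rpc tests: rpc.sh@64 launches spdk_tgt with -e bdev so the bdev tracepoint group is enabled, then waitforlisten blocks until the target's JSON-RPC socket at /var/tmp/spdk.sock accepts connections. A hand-run equivalent of that launch-and-wait step is sketched below; it assumes the stock scripts/rpc.py client, whereas this run resolves rpc_cmd to a Go client, as the later GoRPCClient messages show.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # Poll the UNIX-domain RPC socket; spdk_get_version is the cheapest
    # round-trip and is the same probe skip_rpc uses later in this log.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done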
00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.735 12:48:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.993 [2024-07-15 12:48:56.281263] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:04:43.993 [2024-07-15 12:48:56.281407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60760 ] 00:04:43.993 [2024-07-15 12:48:56.420473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.252 [2024-07-15 12:48:56.497685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.252 [2024-07-15 12:48:56.497751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60760' to capture a snapshot of events at runtime. 00:04:44.252 [2024-07-15 12:48:56.497786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.252 [2024-07-15 12:48:56.497803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.252 [2024-07-15 12:48:56.497815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60760 for offline analysis/debug. 00:04:44.252 [2024-07-15 12:48:56.497851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.819 12:48:57 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.819 12:48:57 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.819 12:48:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.819 12:48:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.819 12:48:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.819 12:48:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.819 12:48:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.819 12:48:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.819 12:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.819 ************************************ 00:04:44.819 START TEST rpc_integrity 00:04:44.819 ************************************ 00:04:44.819 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:44.819 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.819 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.819 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.819 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.819 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.819 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.078 { 00:04:45.078 "aliases": [ 00:04:45.078 "15caad96-eb42-434d-8c27-02a5e74fc0be" 00:04:45.078 ], 00:04:45.078 "assigned_rate_limits": { 00:04:45.078 "r_mbytes_per_sec": 0, 00:04:45.078 "rw_ios_per_sec": 0, 00:04:45.078 "rw_mbytes_per_sec": 0, 00:04:45.078 "w_mbytes_per_sec": 0 00:04:45.078 }, 00:04:45.078 "block_size": 512, 00:04:45.078 "claimed": false, 00:04:45.078 "driver_specific": {}, 00:04:45.078 "memory_domains": [ 00:04:45.078 { 00:04:45.078 "dma_device_id": "system", 00:04:45.078 "dma_device_type": 1 00:04:45.078 }, 00:04:45.078 { 00:04:45.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.078 "dma_device_type": 2 00:04:45.078 } 00:04:45.078 ], 00:04:45.078 "name": "Malloc0", 00:04:45.078 "num_blocks": 16384, 00:04:45.078 "product_name": "Malloc disk", 00:04:45.078 "supported_io_types": { 00:04:45.078 "abort": true, 00:04:45.078 "compare": false, 00:04:45.078 "compare_and_write": false, 00:04:45.078 "copy": true, 00:04:45.078 "flush": true, 00:04:45.078 "get_zone_info": false, 00:04:45.078 "nvme_admin": false, 00:04:45.078 "nvme_io": false, 00:04:45.078 "nvme_io_md": false, 00:04:45.078 "nvme_iov_md": false, 00:04:45.078 "read": true, 00:04:45.078 "reset": true, 00:04:45.078 "seek_data": false, 00:04:45.078 "seek_hole": false, 00:04:45.078 "unmap": true, 00:04:45.078 "write": true, 00:04:45.078 "write_zeroes": true, 00:04:45.078 "zcopy": true, 00:04:45.078 "zone_append": false, 00:04:45.078 "zone_management": false 00:04:45.078 }, 00:04:45.078 "uuid": "15caad96-eb42-434d-8c27-02a5e74fc0be", 00:04:45.078 "zoned": false 00:04:45.078 } 00:04:45.078 ]' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 [2024-07-15 12:48:57.392143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.078 [2024-07-15 12:48:57.392213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.078 [2024-07-15 12:48:57.392244] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21f3af0 00:04:45.078 [2024-07-15 12:48:57.392258] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.078 [2024-07-15 12:48:57.394012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.078 [2024-07-15 12:48:57.394054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.078 Passthru0 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 
12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.078 { 00:04:45.078 "aliases": [ 00:04:45.078 "15caad96-eb42-434d-8c27-02a5e74fc0be" 00:04:45.078 ], 00:04:45.078 "assigned_rate_limits": { 00:04:45.078 "r_mbytes_per_sec": 0, 00:04:45.078 "rw_ios_per_sec": 0, 00:04:45.078 "rw_mbytes_per_sec": 0, 00:04:45.078 "w_mbytes_per_sec": 0 00:04:45.078 }, 00:04:45.078 "block_size": 512, 00:04:45.078 "claim_type": "exclusive_write", 00:04:45.078 "claimed": true, 00:04:45.078 "driver_specific": {}, 00:04:45.078 "memory_domains": [ 00:04:45.078 { 00:04:45.078 "dma_device_id": "system", 00:04:45.078 "dma_device_type": 1 00:04:45.078 }, 00:04:45.078 { 00:04:45.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.078 "dma_device_type": 2 00:04:45.078 } 00:04:45.078 ], 00:04:45.078 "name": "Malloc0", 00:04:45.078 "num_blocks": 16384, 00:04:45.078 "product_name": "Malloc disk", 00:04:45.078 "supported_io_types": { 00:04:45.078 "abort": true, 00:04:45.078 "compare": false, 00:04:45.078 "compare_and_write": false, 00:04:45.078 "copy": true, 00:04:45.078 "flush": true, 00:04:45.078 "get_zone_info": false, 00:04:45.078 "nvme_admin": false, 00:04:45.078 "nvme_io": false, 00:04:45.078 "nvme_io_md": false, 00:04:45.078 "nvme_iov_md": false, 00:04:45.078 "read": true, 00:04:45.078 "reset": true, 00:04:45.078 "seek_data": false, 00:04:45.078 "seek_hole": false, 00:04:45.078 "unmap": true, 00:04:45.078 "write": true, 00:04:45.078 "write_zeroes": true, 00:04:45.078 "zcopy": true, 00:04:45.078 "zone_append": false, 00:04:45.078 "zone_management": false 00:04:45.078 }, 00:04:45.078 "uuid": "15caad96-eb42-434d-8c27-02a5e74fc0be", 00:04:45.078 "zoned": false 00:04:45.078 }, 00:04:45.078 { 00:04:45.078 "aliases": [ 00:04:45.078 "f231832f-91f6-5383-9abc-f2794cd6c5c1" 00:04:45.078 ], 00:04:45.078 "assigned_rate_limits": { 00:04:45.078 "r_mbytes_per_sec": 0, 00:04:45.078 "rw_ios_per_sec": 0, 00:04:45.078 "rw_mbytes_per_sec": 0, 00:04:45.078 "w_mbytes_per_sec": 0 00:04:45.078 }, 00:04:45.078 "block_size": 512, 00:04:45.078 "claimed": false, 00:04:45.078 "driver_specific": { 00:04:45.078 "passthru": { 00:04:45.078 "base_bdev_name": "Malloc0", 00:04:45.078 "name": "Passthru0" 00:04:45.078 } 00:04:45.078 }, 00:04:45.078 "memory_domains": [ 00:04:45.078 { 00:04:45.078 "dma_device_id": "system", 00:04:45.078 "dma_device_type": 1 00:04:45.078 }, 00:04:45.078 { 00:04:45.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.078 "dma_device_type": 2 00:04:45.078 } 00:04:45.078 ], 00:04:45.078 "name": "Passthru0", 00:04:45.078 "num_blocks": 16384, 00:04:45.078 "product_name": "passthru", 00:04:45.078 "supported_io_types": { 00:04:45.078 "abort": true, 00:04:45.078 "compare": false, 00:04:45.078 "compare_and_write": false, 00:04:45.078 "copy": true, 00:04:45.078 "flush": true, 00:04:45.078 "get_zone_info": false, 00:04:45.078 "nvme_admin": false, 00:04:45.078 "nvme_io": false, 00:04:45.078 "nvme_io_md": false, 00:04:45.078 "nvme_iov_md": false, 00:04:45.078 "read": true, 00:04:45.078 "reset": true, 00:04:45.078 "seek_data": false, 00:04:45.078 "seek_hole": false, 00:04:45.078 "unmap": true, 00:04:45.078 "write": true, 00:04:45.078 "write_zeroes": true, 00:04:45.078 
"zcopy": true, 00:04:45.078 "zone_append": false, 00:04:45.078 "zone_management": false 00:04:45.078 }, 00:04:45.078 "uuid": "f231832f-91f6-5383-9abc-f2794cd6c5c1", 00:04:45.078 "zoned": false 00:04:45.078 } 00:04:45.078 ]' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.078 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.078 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.337 ************************************ 00:04:45.337 END TEST rpc_integrity 00:04:45.337 ************************************ 00:04:45.337 12:48:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.337 00:04:45.337 real 0m0.346s 00:04:45.337 user 0m0.239s 00:04:45.337 sys 0m0.032s 00:04:45.337 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.337 12:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.337 12:48:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.338 12:48:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.338 12:48:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.338 12:48:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.338 12:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.338 ************************************ 00:04:45.338 START TEST rpc_plugins 00:04:45.338 ************************************ 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:04:45.338 { 00:04:45.338 "aliases": [ 00:04:45.338 "e7b89858-7e69-412e-bf13-8357ee49ccec" 00:04:45.338 ], 00:04:45.338 "assigned_rate_limits": { 00:04:45.338 "r_mbytes_per_sec": 0, 00:04:45.338 "rw_ios_per_sec": 0, 00:04:45.338 "rw_mbytes_per_sec": 0, 00:04:45.338 "w_mbytes_per_sec": 0 00:04:45.338 }, 00:04:45.338 "block_size": 4096, 00:04:45.338 "claimed": false, 00:04:45.338 "driver_specific": {}, 00:04:45.338 "memory_domains": [ 00:04:45.338 { 00:04:45.338 "dma_device_id": "system", 00:04:45.338 "dma_device_type": 1 00:04:45.338 }, 00:04:45.338 { 00:04:45.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.338 "dma_device_type": 2 00:04:45.338 } 00:04:45.338 ], 00:04:45.338 "name": "Malloc1", 00:04:45.338 "num_blocks": 256, 00:04:45.338 "product_name": "Malloc disk", 00:04:45.338 "supported_io_types": { 00:04:45.338 "abort": true, 00:04:45.338 "compare": false, 00:04:45.338 "compare_and_write": false, 00:04:45.338 "copy": true, 00:04:45.338 "flush": true, 00:04:45.338 "get_zone_info": false, 00:04:45.338 "nvme_admin": false, 00:04:45.338 "nvme_io": false, 00:04:45.338 "nvme_io_md": false, 00:04:45.338 "nvme_iov_md": false, 00:04:45.338 "read": true, 00:04:45.338 "reset": true, 00:04:45.338 "seek_data": false, 00:04:45.338 "seek_hole": false, 00:04:45.338 "unmap": true, 00:04:45.338 "write": true, 00:04:45.338 "write_zeroes": true, 00:04:45.338 "zcopy": true, 00:04:45.338 "zone_append": false, 00:04:45.338 "zone_management": false 00:04:45.338 }, 00:04:45.338 "uuid": "e7b89858-7e69-412e-bf13-8357ee49ccec", 00:04:45.338 "zoned": false 00:04:45.338 } 00:04:45.338 ]' 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.338 ************************************ 00:04:45.338 END TEST rpc_plugins 00:04:45.338 ************************************ 00:04:45.338 12:48:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.338 00:04:45.338 real 0m0.162s 00:04:45.338 user 0m0.110s 00:04:45.338 sys 0m0.016s 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.338 12:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.597 12:48:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.597 12:48:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.597 12:48:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.597 12:48:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.597 12:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.597 ************************************ 00:04:45.597 START TEST 
rpc_trace_cmd_test 00:04:45.597 ************************************ 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.597 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.597 "bdev": { 00:04:45.597 "mask": "0x8", 00:04:45.597 "tpoint_mask": "0xffffffffffffffff" 00:04:45.597 }, 00:04:45.597 "bdev_nvme": { 00:04:45.597 "mask": "0x4000", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "blobfs": { 00:04:45.597 "mask": "0x80", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "dsa": { 00:04:45.597 "mask": "0x200", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "ftl": { 00:04:45.597 "mask": "0x40", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "iaa": { 00:04:45.597 "mask": "0x1000", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "iscsi_conn": { 00:04:45.597 "mask": "0x2", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "nvme_pcie": { 00:04:45.597 "mask": "0x800", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "nvme_tcp": { 00:04:45.597 "mask": "0x2000", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "nvmf_rdma": { 00:04:45.597 "mask": "0x10", 00:04:45.597 "tpoint_mask": "0x0" 00:04:45.597 }, 00:04:45.597 "nvmf_tcp": { 00:04:45.597 "mask": "0x20", 00:04:45.598 "tpoint_mask": "0x0" 00:04:45.598 }, 00:04:45.598 "scsi": { 00:04:45.598 "mask": "0x4", 00:04:45.598 "tpoint_mask": "0x0" 00:04:45.598 }, 00:04:45.598 "sock": { 00:04:45.598 "mask": "0x8000", 00:04:45.598 "tpoint_mask": "0x0" 00:04:45.598 }, 00:04:45.598 "thread": { 00:04:45.598 "mask": "0x400", 00:04:45.598 "tpoint_mask": "0x0" 00:04:45.598 }, 00:04:45.598 "tpoint_group_mask": "0x8", 00:04:45.598 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60760" 00:04:45.598 }' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.598 12:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.598 12:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.598 12:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.860 ************************************ 00:04:45.860 END TEST rpc_trace_cmd_test 00:04:45.860 ************************************ 00:04:45.860 12:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.860 00:04:45.860 real 0m0.278s 00:04:45.860 user 0m0.239s 00:04:45.860 sys 0m0.025s 00:04:45.860 12:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.860 12:48:58 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.860 12:48:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:45.860 12:48:58 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:45.860 12:48:58 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:45.860 12:48:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.860 12:48:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.860 12:48:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.860 ************************************ 00:04:45.860 START TEST go_rpc 00:04:45.860 ************************************ 00:04:45.860 12:48:58 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:45.860 12:48:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.860 12:48:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:45.860 12:48:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:45.860 12:48:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:45.860 12:48:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["a65b14d0-3f6a-4b0e-aa3f-af2c668a9555"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"a65b14d0-3f6a-4b0e-aa3f-af2c668a9555","zoned":false}]' 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.861 12:48:58 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:45.861 12:48:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:46.160 ************************************ 00:04:46.160 END TEST go_rpc 00:04:46.160 ************************************ 00:04:46.160 12:48:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:46.160 00:04:46.160 real 0m0.233s 00:04:46.160 user 0m0.172s 00:04:46.160 sys 0m0.031s 00:04:46.160 12:48:58 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.160 12:48:58 
rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 12:48:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.160 12:48:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.160 12:48:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.160 12:48:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.160 12:48:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.160 12:48:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 ************************************ 00:04:46.160 START TEST rpc_daemon_integrity 00:04:46.160 ************************************ 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.160 { 00:04:46.160 "aliases": [ 00:04:46.160 "3c6e97cb-fa08-4d46-ac60-753abddf7fcf" 00:04:46.160 ], 00:04:46.160 "assigned_rate_limits": { 00:04:46.160 "r_mbytes_per_sec": 0, 00:04:46.160 "rw_ios_per_sec": 0, 00:04:46.160 "rw_mbytes_per_sec": 0, 00:04:46.160 "w_mbytes_per_sec": 0 00:04:46.160 }, 00:04:46.160 "block_size": 512, 00:04:46.160 "claimed": false, 00:04:46.160 "driver_specific": {}, 00:04:46.160 "memory_domains": [ 00:04:46.160 { 00:04:46.160 "dma_device_id": "system", 00:04:46.160 "dma_device_type": 1 00:04:46.160 }, 00:04:46.160 { 00:04:46.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.160 "dma_device_type": 2 00:04:46.160 } 00:04:46.160 ], 00:04:46.160 "name": "Malloc3", 00:04:46.160 "num_blocks": 16384, 00:04:46.160 "product_name": "Malloc disk", 00:04:46.160 "supported_io_types": { 00:04:46.160 "abort": true, 00:04:46.160 "compare": false, 00:04:46.160 "compare_and_write": false, 00:04:46.160 "copy": true, 00:04:46.160 "flush": true, 00:04:46.160 "get_zone_info": false, 00:04:46.160 "nvme_admin": false, 00:04:46.160 "nvme_io": false, 00:04:46.160 "nvme_io_md": false, 00:04:46.160 "nvme_iov_md": false, 00:04:46.160 "read": true, 00:04:46.160 "reset": true, 00:04:46.160 "seek_data": false, 
00:04:46.160 "seek_hole": false, 00:04:46.160 "unmap": true, 00:04:46.160 "write": true, 00:04:46.160 "write_zeroes": true, 00:04:46.160 "zcopy": true, 00:04:46.160 "zone_append": false, 00:04:46.160 "zone_management": false 00:04:46.160 }, 00:04:46.160 "uuid": "3c6e97cb-fa08-4d46-ac60-753abddf7fcf", 00:04:46.160 "zoned": false 00:04:46.160 } 00:04:46.160 ]' 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 [2024-07-15 12:48:58.572550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:46.160 [2024-07-15 12:48:58.572615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.160 [2024-07-15 12:48:58.572639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2335110 00:04:46.160 [2024-07-15 12:48:58.572648] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.160 [2024-07-15 12:48:58.574106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.160 [2024-07-15 12:48:58.574144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.160 Passthru0 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.160 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.160 { 00:04:46.160 "aliases": [ 00:04:46.160 "3c6e97cb-fa08-4d46-ac60-753abddf7fcf" 00:04:46.160 ], 00:04:46.160 "assigned_rate_limits": { 00:04:46.160 "r_mbytes_per_sec": 0, 00:04:46.160 "rw_ios_per_sec": 0, 00:04:46.160 "rw_mbytes_per_sec": 0, 00:04:46.160 "w_mbytes_per_sec": 0 00:04:46.160 }, 00:04:46.160 "block_size": 512, 00:04:46.160 "claim_type": "exclusive_write", 00:04:46.160 "claimed": true, 00:04:46.160 "driver_specific": {}, 00:04:46.160 "memory_domains": [ 00:04:46.160 { 00:04:46.160 "dma_device_id": "system", 00:04:46.160 "dma_device_type": 1 00:04:46.160 }, 00:04:46.160 { 00:04:46.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.160 "dma_device_type": 2 00:04:46.160 } 00:04:46.160 ], 00:04:46.160 "name": "Malloc3", 00:04:46.160 "num_blocks": 16384, 00:04:46.160 "product_name": "Malloc disk", 00:04:46.160 "supported_io_types": { 00:04:46.160 "abort": true, 00:04:46.160 "compare": false, 00:04:46.160 "compare_and_write": false, 00:04:46.160 "copy": true, 00:04:46.160 "flush": true, 00:04:46.160 "get_zone_info": false, 00:04:46.161 "nvme_admin": false, 00:04:46.161 "nvme_io": false, 00:04:46.161 "nvme_io_md": false, 00:04:46.161 "nvme_iov_md": false, 00:04:46.161 "read": true, 00:04:46.161 "reset": true, 00:04:46.161 "seek_data": false, 00:04:46.161 "seek_hole": false, 00:04:46.161 "unmap": true, 00:04:46.161 "write": true, 00:04:46.161 "write_zeroes": true, 
00:04:46.161 "zcopy": true, 00:04:46.161 "zone_append": false, 00:04:46.161 "zone_management": false 00:04:46.161 }, 00:04:46.161 "uuid": "3c6e97cb-fa08-4d46-ac60-753abddf7fcf", 00:04:46.161 "zoned": false 00:04:46.161 }, 00:04:46.161 { 00:04:46.161 "aliases": [ 00:04:46.161 "0ffb36b0-c9f6-5468-a597-f5b3da9bae1f" 00:04:46.161 ], 00:04:46.161 "assigned_rate_limits": { 00:04:46.161 "r_mbytes_per_sec": 0, 00:04:46.161 "rw_ios_per_sec": 0, 00:04:46.161 "rw_mbytes_per_sec": 0, 00:04:46.161 "w_mbytes_per_sec": 0 00:04:46.161 }, 00:04:46.161 "block_size": 512, 00:04:46.161 "claimed": false, 00:04:46.161 "driver_specific": { 00:04:46.161 "passthru": { 00:04:46.161 "base_bdev_name": "Malloc3", 00:04:46.161 "name": "Passthru0" 00:04:46.161 } 00:04:46.161 }, 00:04:46.161 "memory_domains": [ 00:04:46.161 { 00:04:46.161 "dma_device_id": "system", 00:04:46.161 "dma_device_type": 1 00:04:46.161 }, 00:04:46.161 { 00:04:46.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.161 "dma_device_type": 2 00:04:46.161 } 00:04:46.161 ], 00:04:46.161 "name": "Passthru0", 00:04:46.161 "num_blocks": 16384, 00:04:46.161 "product_name": "passthru", 00:04:46.161 "supported_io_types": { 00:04:46.161 "abort": true, 00:04:46.161 "compare": false, 00:04:46.161 "compare_and_write": false, 00:04:46.161 "copy": true, 00:04:46.161 "flush": true, 00:04:46.161 "get_zone_info": false, 00:04:46.161 "nvme_admin": false, 00:04:46.161 "nvme_io": false, 00:04:46.161 "nvme_io_md": false, 00:04:46.161 "nvme_iov_md": false, 00:04:46.161 "read": true, 00:04:46.161 "reset": true, 00:04:46.161 "seek_data": false, 00:04:46.161 "seek_hole": false, 00:04:46.161 "unmap": true, 00:04:46.161 "write": true, 00:04:46.161 "write_zeroes": true, 00:04:46.161 "zcopy": true, 00:04:46.161 "zone_append": false, 00:04:46.161 "zone_management": false 00:04:46.161 }, 00:04:46.161 "uuid": "0ffb36b0-c9f6-5468-a597-f5b3da9bae1f", 00:04:46.161 "zoned": false 00:04:46.161 } 00:04:46.161 ]' 00:04:46.161 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.418 ************************************ 00:04:46.418 END TEST rpc_daemon_integrity 00:04:46.418 
************************************ 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.418 00:04:46.418 real 0m0.323s 00:04:46.418 user 0m0.217s 00:04:46.418 sys 0m0.033s 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.418 12:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.418 12:48:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.418 12:48:58 rpc -- rpc/rpc.sh@84 -- # killprocess 60760 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@948 -- # '[' -z 60760 ']' 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@952 -- # kill -0 60760 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60760 00:04:46.418 killing process with pid 60760 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60760' 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@967 -- # kill 60760 00:04:46.418 12:48:58 rpc -- common/autotest_common.sh@972 -- # wait 60760 00:04:46.675 00:04:46.675 real 0m2.949s 00:04:46.675 user 0m4.098s 00:04:46.675 sys 0m0.574s 00:04:46.675 ************************************ 00:04:46.675 END TEST rpc 00:04:46.675 ************************************ 00:04:46.675 12:48:59 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.675 12:48:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.675 12:48:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.675 12:48:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.675 12:48:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.675 12:48:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.675 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:04:46.675 ************************************ 00:04:46.675 START TEST skip_rpc 00:04:46.675 ************************************ 00:04:46.675 12:48:59 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.934 * Looking for test storage... 
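Everything between START TEST rpc and the END TEST rpc banner above is JSON-RPC traffic against that one spdk_tgt instance: rpc_integrity and rpc_daemon_integrity each create a malloc bdev, stack a passthru bdev on it, check the bdev_get_bdevs dump (note claim_type exclusive_write on the claimed malloc disk), and tear both down; rpc_plugins and go_rpc run the same create/delete cycle through an rpc_cmd plugin (hence the PYTHONPATH export above) and the hello_gorpc client; rpc_trace_cmd_test reads trace_get_info to confirm tpoint_group_mask 0x8, the bdev group requested with -e bdev, plus the shm path keyed by the target's pid. A condensed manual replay of the integrity cycle, sketched with the stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket (this run's rpc_cmd resolves to a Go client, but the wire protocol is the same):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)              # 8 MB, 512 B blocks -> 16384 blocks, as dumped above
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0  # claims the malloc bdev (claim_type exclusive_write)
    $rpc bdev_get_bdevs | jq length                      # 2: the malloc bdev plus the passthru stacked on it
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"                    # bdev_get_bdevs drops back to an empty list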
00:04:46.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.934 12:48:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.934 12:48:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.934 12:48:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.934 12:48:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.934 12:48:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.934 12:48:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.934 ************************************ 00:04:46.934 START TEST skip_rpc 00:04:46.934 ************************************ 00:04:46.934 12:48:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:46.934 12:48:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61021 00:04:46.935 12:48:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.935 12:48:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.935 12:48:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.935 [2024-07-15 12:48:59.251371] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:04:46.935 [2024-07-15 12:48:59.251477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:04:46.935 [2024-07-15 12:48:59.383052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.194 [2024-07-15 12:48:59.470193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.462 2024/07/15 12:49:04 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.462 12:49:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61021 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 61021 ']' 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 61021 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61021 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61021' 00:04:52.462 killing process with pid 61021 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 61021 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 61021 00:04:52.462 00:04:52.462 ************************************ 00:04:52.462 END TEST skip_rpc 00:04:52.462 ************************************ 00:04:52.462 real 0m5.296s 00:04:52.462 user 0m5.015s 00:04:52.462 sys 0m0.172s 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.462 12:49:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.462 12:49:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:52.462 12:49:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.462 12:49:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.462 12:49:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.462 12:49:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.463 ************************************ 00:04:52.463 START TEST skip_rpc_with_json 00:04:52.463 ************************************ 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61108 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61108 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61108 ']' 00:04:52.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
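skip_rpc, which just ended, is the negative counterpart of the earlier RPC tests: skip_rpc.sh@15 starts spdk_tgt with --no-rpc-server, sleeps five seconds, and then expects rpc_cmd spdk_get_version to fail, which it does with the "could not connect to a Unix socket on address /var/tmp/spdk.sock" error above, since no socket is ever created. A stripped-down version of that check is sketched below, reusing the binary path from the log; the real harness wraps the failure assertion in its NOT helper.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5    # mirrors skip_rpc.sh@19's fixed wait before probing
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server should not be listening" >&2
    fi
    kill "$tgt"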
00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.463 12:49:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.463 [2024-07-15 12:49:04.610682] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:04:52.463 [2024-07-15 12:49:04.610837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61108 ] 00:04:52.463 [2024-07-15 12:49:04.749673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.463 [2024-07-15 12:49:04.835956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.400 [2024-07-15 12:49:05.680204] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:53.400 2024/07/15 12:49:05 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:53.400 request: 00:04:53.400 { 00:04:53.400 "method": "nvmf_get_transports", 00:04:53.400 "params": { 00:04:53.400 "trtype": "tcp" 00:04:53.400 } 00:04:53.400 } 00:04:53.400 Got JSON-RPC error response 00:04:53.400 GoRPCClient: error on JSON-RPC call 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.400 [2024-07-15 12:49:05.692309] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.400 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.400 { 00:04:53.400 "subsystems": [ 00:04:53.400 { 00:04:53.400 "subsystem": "keyring", 00:04:53.400 "config": [] 00:04:53.400 }, 00:04:53.400 { 00:04:53.400 "subsystem": "iobuf", 00:04:53.400 "config": [ 00:04:53.400 { 00:04:53.400 "method": "iobuf_set_options", 00:04:53.400 "params": { 00:04:53.400 "large_bufsize": 135168, 00:04:53.400 "large_pool_count": 1024, 00:04:53.400 "small_bufsize": 8192, 00:04:53.400 "small_pool_count": 8192 00:04:53.400 } 00:04:53.400 } 
00:04:53.400 ] 00:04:53.400 }, 00:04:53.400 { 00:04:53.400 "subsystem": "sock", 00:04:53.400 "config": [ 00:04:53.400 { 00:04:53.400 "method": "sock_set_default_impl", 00:04:53.400 "params": { 00:04:53.400 "impl_name": "posix" 00:04:53.400 } 00:04:53.400 }, 00:04:53.400 { 00:04:53.400 "method": "sock_impl_set_options", 00:04:53.400 "params": { 00:04:53.400 "enable_ktls": false, 00:04:53.400 "enable_placement_id": 0, 00:04:53.400 "enable_quickack": false, 00:04:53.400 "enable_recv_pipe": true, 00:04:53.400 "enable_zerocopy_send_client": false, 00:04:53.400 "enable_zerocopy_send_server": true, 00:04:53.400 "impl_name": "ssl", 00:04:53.400 "recv_buf_size": 4096, 00:04:53.400 "send_buf_size": 4096, 00:04:53.400 "tls_version": 0, 00:04:53.400 "zerocopy_threshold": 0 00:04:53.400 } 00:04:53.400 }, 00:04:53.400 { 00:04:53.400 "method": "sock_impl_set_options", 00:04:53.400 "params": { 00:04:53.400 "enable_ktls": false, 00:04:53.400 "enable_placement_id": 0, 00:04:53.400 "enable_quickack": false, 00:04:53.400 "enable_recv_pipe": true, 00:04:53.401 "enable_zerocopy_send_client": false, 00:04:53.401 "enable_zerocopy_send_server": true, 00:04:53.401 "impl_name": "posix", 00:04:53.401 "recv_buf_size": 2097152, 00:04:53.401 "send_buf_size": 2097152, 00:04:53.401 "tls_version": 0, 00:04:53.401 "zerocopy_threshold": 0 00:04:53.401 } 00:04:53.401 } 00:04:53.401 ] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "vmd", 00:04:53.401 "config": [] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "accel", 00:04:53.401 "config": [ 00:04:53.401 { 00:04:53.401 "method": "accel_set_options", 00:04:53.401 "params": { 00:04:53.401 "buf_count": 2048, 00:04:53.401 "large_cache_size": 16, 00:04:53.401 "sequence_count": 2048, 00:04:53.401 "small_cache_size": 128, 00:04:53.401 "task_count": 2048 00:04:53.401 } 00:04:53.401 } 00:04:53.401 ] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "bdev", 00:04:53.401 "config": [ 00:04:53.401 { 00:04:53.401 "method": "bdev_set_options", 00:04:53.401 "params": { 00:04:53.401 "bdev_auto_examine": true, 00:04:53.401 "bdev_io_cache_size": 256, 00:04:53.401 "bdev_io_pool_size": 65535, 00:04:53.401 "iobuf_large_cache_size": 16, 00:04:53.401 "iobuf_small_cache_size": 128 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "bdev_raid_set_options", 00:04:53.401 "params": { 00:04:53.401 "process_window_size_kb": 1024 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "bdev_iscsi_set_options", 00:04:53.401 "params": { 00:04:53.401 "timeout_sec": 30 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "bdev_nvme_set_options", 00:04:53.401 "params": { 00:04:53.401 "action_on_timeout": "none", 00:04:53.401 "allow_accel_sequence": false, 00:04:53.401 "arbitration_burst": 0, 00:04:53.401 "bdev_retry_count": 3, 00:04:53.401 "ctrlr_loss_timeout_sec": 0, 00:04:53.401 "delay_cmd_submit": true, 00:04:53.401 "dhchap_dhgroups": [ 00:04:53.401 "null", 00:04:53.401 "ffdhe2048", 00:04:53.401 "ffdhe3072", 00:04:53.401 "ffdhe4096", 00:04:53.401 "ffdhe6144", 00:04:53.401 "ffdhe8192" 00:04:53.401 ], 00:04:53.401 "dhchap_digests": [ 00:04:53.401 "sha256", 00:04:53.401 "sha384", 00:04:53.401 "sha512" 00:04:53.401 ], 00:04:53.401 "disable_auto_failback": false, 00:04:53.401 "fast_io_fail_timeout_sec": 0, 00:04:53.401 "generate_uuids": false, 00:04:53.401 "high_priority_weight": 0, 00:04:53.401 "io_path_stat": false, 00:04:53.401 "io_queue_requests": 0, 00:04:53.401 "keep_alive_timeout_ms": 10000, 00:04:53.401 "low_priority_weight": 0, 
00:04:53.401 "medium_priority_weight": 0, 00:04:53.401 "nvme_adminq_poll_period_us": 10000, 00:04:53.401 "nvme_error_stat": false, 00:04:53.401 "nvme_ioq_poll_period_us": 0, 00:04:53.401 "rdma_cm_event_timeout_ms": 0, 00:04:53.401 "rdma_max_cq_size": 0, 00:04:53.401 "rdma_srq_size": 0, 00:04:53.401 "reconnect_delay_sec": 0, 00:04:53.401 "timeout_admin_us": 0, 00:04:53.401 "timeout_us": 0, 00:04:53.401 "transport_ack_timeout": 0, 00:04:53.401 "transport_retry_count": 4, 00:04:53.401 "transport_tos": 0 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "bdev_nvme_set_hotplug", 00:04:53.401 "params": { 00:04:53.401 "enable": false, 00:04:53.401 "period_us": 100000 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "bdev_wait_for_examine" 00:04:53.401 } 00:04:53.401 ] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "scsi", 00:04:53.401 "config": null 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "scheduler", 00:04:53.401 "config": [ 00:04:53.401 { 00:04:53.401 "method": "framework_set_scheduler", 00:04:53.401 "params": { 00:04:53.401 "name": "static" 00:04:53.401 } 00:04:53.401 } 00:04:53.401 ] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "vhost_scsi", 00:04:53.401 "config": [] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "vhost_blk", 00:04:53.401 "config": [] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "ublk", 00:04:53.401 "config": [] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "nbd", 00:04:53.401 "config": [] 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "subsystem": "nvmf", 00:04:53.401 "config": [ 00:04:53.401 { 00:04:53.401 "method": "nvmf_set_config", 00:04:53.401 "params": { 00:04:53.401 "admin_cmd_passthru": { 00:04:53.401 "identify_ctrlr": false 00:04:53.401 }, 00:04:53.401 "discovery_filter": "match_any" 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "nvmf_set_max_subsystems", 00:04:53.401 "params": { 00:04:53.401 "max_subsystems": 1024 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "nvmf_set_crdt", 00:04:53.401 "params": { 00:04:53.401 "crdt1": 0, 00:04:53.401 "crdt2": 0, 00:04:53.401 "crdt3": 0 00:04:53.401 } 00:04:53.401 }, 00:04:53.401 { 00:04:53.401 "method": "nvmf_create_transport", 00:04:53.401 "params": { 00:04:53.401 "abort_timeout_sec": 1, 00:04:53.401 "ack_timeout": 0, 00:04:53.401 "buf_cache_size": 4294967295, 00:04:53.401 "c2h_success": true, 00:04:53.401 "data_wr_pool_size": 0, 00:04:53.401 "dif_insert_or_strip": false, 00:04:53.401 "in_capsule_data_size": 4096, 00:04:53.401 "io_unit_size": 131072, 00:04:53.401 "max_aq_depth": 128, 00:04:53.401 "max_io_qpairs_per_ctrlr": 127, 00:04:53.401 "max_io_size": 131072, 00:04:53.401 "max_queue_depth": 128, 00:04:53.401 "num_shared_buffers": 511, 00:04:53.401 "sock_priority": 0, 00:04:53.401 "trtype": "TCP", 00:04:53.401 "zcopy": false 00:04:53.401 } 00:04:53.401 } 00:04:53.401 ] 00:04:53.402 }, 00:04:53.402 { 00:04:53.402 "subsystem": "iscsi", 00:04:53.402 "config": [ 00:04:53.402 { 00:04:53.402 "method": "iscsi_set_options", 00:04:53.402 "params": { 00:04:53.402 "allow_duplicated_isid": false, 00:04:53.402 "chap_group": 0, 00:04:53.402 "data_out_pool_size": 2048, 00:04:53.402 "default_time2retain": 20, 00:04:53.402 "default_time2wait": 2, 00:04:53.402 "disable_chap": false, 00:04:53.402 "error_recovery_level": 0, 00:04:53.402 "first_burst_length": 8192, 00:04:53.402 "immediate_data": true, 00:04:53.402 "immediate_data_pool_size": 16384, 00:04:53.402 "max_connections_per_session": 
2, 00:04:53.402 "max_large_datain_per_connection": 64, 00:04:53.402 "max_queue_depth": 64, 00:04:53.402 "max_r2t_per_connection": 4, 00:04:53.402 "max_sessions": 128, 00:04:53.402 "mutual_chap": false, 00:04:53.402 "node_base": "iqn.2016-06.io.spdk", 00:04:53.402 "nop_in_interval": 30, 00:04:53.402 "nop_timeout": 60, 00:04:53.402 "pdu_pool_size": 36864, 00:04:53.402 "require_chap": false 00:04:53.402 } 00:04:53.402 } 00:04:53.402 ] 00:04:53.402 } 00:04:53.402 ] 00:04:53.402 } 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61108 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61108 ']' 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61108 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:53.402 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61108 00:04:53.661 killing process with pid 61108 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61108' 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61108 00:04:53.661 12:49:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61108 00:04:53.919 12:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61153 00:04:53.919 12:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.919 12:49:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61153 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61153 ']' 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61153 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61153 00:04:59.185 killing process with pid 61153 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61153' 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61153 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61153 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.185 00:04:59.185 real 0m6.919s 00:04:59.185 user 0m6.918s 00:04:59.185 sys 0m0.461s 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 ************************************ 00:04:59.185 END TEST skip_rpc_with_json 00:04:59.185 ************************************ 00:04:59.185 12:49:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.185 12:49:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:59.185 12:49:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.185 12:49:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.185 12:49:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.185 ************************************ 00:04:59.185 START TEST skip_rpc_with_delay 00:04:59.185 ************************************ 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.185 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.186 [2024-07-15 12:49:11.560332] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
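Note on the error above: it is the expected outcome — spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given, and skip_rpc_with_delay only passes if that launch fails. A minimal stand-alone sketch of the same check, outside the NOT helper used by autotest_common.sh (binary path is the one from this log; the timeout guard is an addition of this sketch):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # The flag combination below must exit non-zero almost immediately; the
  # timeout only guards against a hang if spdk_tgt ever started successfully.
  if ! timeout 10 "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'OK: --wait-for-rpc was rejected without an RPC server'
  else
      echo 'FAIL: spdk_tgt started despite --no-rpc-server --wait-for-rpc' >&2
      exit 1
  fi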
00:04:59.186 [2024-07-15 12:49:11.560544] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.186 00:04:59.186 real 0m0.077s 00:04:59.186 user 0m0.052s 00:04:59.186 sys 0m0.024s 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.186 ************************************ 00:04:59.186 END TEST skip_rpc_with_delay 00:04:59.186 ************************************ 00:04:59.186 12:49:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 12:49:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:59.186 12:49:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.186 12:49:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.186 12:49:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.186 12:49:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.186 12:49:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.186 12:49:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.186 ************************************ 00:04:59.186 START TEST exit_on_failed_rpc_init 00:04:59.186 ************************************ 00:04:59.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61257 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61257 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61257 ']' 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.186 12:49:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.445 [2024-07-15 12:49:11.709752] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:04:59.445 [2024-07-15 12:49:11.710129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61257 ] 00:04:59.445 [2024-07-15 12:49:11.858188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.704 [2024-07-15 12:49:11.950120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.272 12:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.531 [2024-07-15 12:49:12.783543] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:00.531 [2024-07-15 12:49:12.783647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61287 ] 00:05:00.531 [2024-07-15 12:49:12.926401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.789 [2024-07-15 12:49:13.006070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.789 [2024-07-15 12:49:13.006157] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
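Note: the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error is exactly what exit_on_failed_rpc_init probes for — both instances were started without -r, so both try to serve RPC on the default socket. For contrast, a hedged sketch of running two targets side by side without that clash, using the same -r flag the json_config tests below rely on (socket paths here are illustrative; memory sizing and hugepage setup aside):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # Distinct core masks and distinct RPC listen paths, so neither instance
  # collides with the other's /var/tmp socket.
  "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk_a.sock &
  pid_a=$!
  "$SPDK_BIN" -m 0x2 -r /var/tmp/spdk_b.sock &
  pid_b=$!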
00:05:00.789 [2024-07-15 12:49:13.006172] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.789 [2024-07-15 12:49:13.006180] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61257 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61257 ']' 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61257 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61257 00:05:00.789 killing process with pid 61257 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61257' 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61257 00:05:00.789 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61257 00:05:01.047 00:05:01.047 real 0m1.748s 00:05:01.047 user 0m2.177s 00:05:01.047 sys 0m0.325s 00:05:01.047 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.047 12:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 ************************************ 00:05:01.047 END TEST exit_on_failed_rpc_init 00:05:01.047 ************************************ 00:05:01.047 12:49:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.047 12:49:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.047 ************************************ 00:05:01.047 END TEST skip_rpc 00:05:01.047 ************************************ 00:05:01.047 00:05:01.047 real 0m14.298s 00:05:01.047 user 0m14.252s 00:05:01.047 sys 0m1.139s 00:05:01.047 12:49:13 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.047 12:49:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 12:49:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.047 12:49:13 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.047 12:49:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.047 
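Note: the rpc_client test starting here drives the target's JSON-RPC socket from a compiled test binary; the same round trip can be sketched from the shell with the rpc.py wrapper used throughout this log. Socket path and method below are only examples and assume a target is already running:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Ask the running target which RPC methods it currently exposes.
  "$RPC_PY" -s /var/tmp/spdk.sock rpc_get_methods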
12:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.047 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:05:01.047 ************************************ 00:05:01.047 START TEST rpc_client 00:05:01.047 ************************************ 00:05:01.047 12:49:13 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.305 * Looking for test storage... 00:05:01.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.305 12:49:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:01.305 OK 00:05:01.305 12:49:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.305 00:05:01.305 real 0m0.085s 00:05:01.305 user 0m0.044s 00:05:01.305 sys 0m0.047s 00:05:01.305 ************************************ 00:05:01.305 END TEST rpc_client 00:05:01.305 ************************************ 00:05:01.305 12:49:13 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.305 12:49:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.305 12:49:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.305 12:49:13 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.305 12:49:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.305 12:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.305 12:49:13 -- common/autotest_common.sh@10 -- # set +x 00:05:01.305 ************************************ 00:05:01.305 START TEST json_config 00:05:01.305 ************************************ 00:05:01.305 12:49:13 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.305 12:49:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.305 12:49:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.305 12:49:13 json_config -- 
nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.305 12:49:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.305 12:49:13 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.305 12:49:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.305 12:49:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 12:49:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.305 12:49:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.306 12:49:13 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.306 12:49:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@51 -- # : 0 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.306 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.306 12:49:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.306 12:49:13 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:01.306 INFO: JSON configuration test init 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 12:49:13 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:01.306 12:49:13 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.306 12:49:13 json_config -- json_config/common.sh@10 -- # shift 00:05:01.306 12:49:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.306 Waiting for target to run... 00:05:01.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.306 12:49:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.306 12:49:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.306 12:49:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.306 12:49:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.306 12:49:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61405 00:05:01.306 12:49:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:01.306 12:49:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
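Note: the target is launched here with --wait-for-rpc on its own socket (/var/tmp/spdk_tgt.sock), and the waitforlisten helper blocks until that socket answers before the test continues. A minimal stand-in for that wait — not the helper's actual implementation; the probe method is just an example:

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  for i in $(seq 1 100); do
      # Succeeds once spdk_tgt is up and listening on $SOCK.
      if "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          echo "target is listening on $SOCK"
          break
      fi
      sleep 0.1
  done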
00:05:01.306 12:49:13 json_config -- json_config/common.sh@25 -- # waitforlisten 61405 /var/tmp/spdk_tgt.sock 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@829 -- # '[' -z 61405 ']' 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.306 12:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 [2024-07-15 12:49:13.766531] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:01.306 [2024-07-15 12:49:13.766948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61405 ] 00:05:01.871 [2024-07-15 12:49:14.072171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.871 [2024-07-15 12:49:14.119560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:02.455 00:05:02.455 12:49:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.455 12:49:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:02.455 12:49:14 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:02.455 12:49:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.020 12:49:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:03.020 12:49:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.020 12:49:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.021 12:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.021 12:49:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:03.021 12:49:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.021 12:49:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.021 12:49:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:03.021 12:49:15 
json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:03.021 12:49:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:03.278 12:49:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.278 12:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:03.278 12:49:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:03.279 12:49:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.279 12:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:03.279 12:49:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.279 12:49:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.850 MallocForNvmf0 00:05:03.850 12:49:16 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.850 12:49:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.850 MallocForNvmf1 00:05:03.850 12:49:16 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.850 12:49:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.125 [2024-07-15 12:49:16.506645] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.125 12:49:16 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.125 12:49:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.382 12:49:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.382 12:49:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.638 12:49:17 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.638 12:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.894 12:49:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.894 12:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.151 [2024-07-15 12:49:17.559162] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.151 12:49:17 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:05.151 12:49:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.151 12:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 12:49:17 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:05.151 12:49:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.151 12:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.408 12:49:17 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:05.408 12:49:17 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.408 12:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.666 MallocBdevForConfigChangeCheck 00:05:05.666 12:49:17 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:05.666 12:49:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.666 12:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.666 12:49:17 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:05.666 12:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.924 INFO: shutting down applications... 00:05:05.924 12:49:18 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
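Note: collected in one place, the NVMe-oF/TCP bring-up the json_config test just performed over /var/tmp/spdk_tgt.sock looks roughly like the following rpc.py sequence (the individual calls all appear above; only the grouping and the shell variable are added here):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB malloc bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB malloc bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420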
00:05:05.924 12:49:18 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:05.924 12:49:18 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:05.924 12:49:18 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:05.924 12:49:18 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.181 Calling clear_iscsi_subsystem 00:05:06.181 Calling clear_nvmf_subsystem 00:05:06.181 Calling clear_nbd_subsystem 00:05:06.181 Calling clear_ublk_subsystem 00:05:06.181 Calling clear_vhost_blk_subsystem 00:05:06.181 Calling clear_vhost_scsi_subsystem 00:05:06.181 Calling clear_bdev_subsystem 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.439 12:49:18 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:06.697 12:49:19 json_config -- json_config/json_config.sh@345 -- # break 00:05:06.697 12:49:19 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:06.697 12:49:19 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:06.697 12:49:19 json_config -- json_config/common.sh@31 -- # local app=target 00:05:06.697 12:49:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.697 12:49:19 json_config -- json_config/common.sh@35 -- # [[ -n 61405 ]] 00:05:06.697 12:49:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61405 00:05:06.697 12:49:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.697 12:49:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.697 12:49:19 json_config -- json_config/common.sh@41 -- # kill -0 61405 00:05:06.697 12:49:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.263 12:49:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.263 12:49:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.263 12:49:19 json_config -- json_config/common.sh@41 -- # kill -0 61405 00:05:07.263 12:49:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.263 12:49:19 json_config -- json_config/common.sh@43 -- # break 00:05:07.263 12:49:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.263 12:49:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.263 SPDK target shutdown done 00:05:07.263 INFO: relaunching applications... 00:05:07.263 12:49:19 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
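Note: the shutdown just logged follows a simple pattern from json_config/common.sh — send SIGINT, then poll the pid until it is gone. A condensed sketch of that loop (the pid value is the one from this run, shown only for illustration):

  pid=61405
  kill -SIGINT "$pid"
  # Up to 30 polls at 0.5 s each, i.e. roughly 15 seconds of grace.
  for i in $(seq 1 30); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done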
00:05:07.263 12:49:19 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.263 12:49:19 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.263 12:49:19 json_config -- json_config/common.sh@10 -- # shift 00:05:07.263 12:49:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.263 12:49:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.263 12:49:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.263 12:49:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.263 12:49:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.263 12:49:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61685 00:05:07.263 12:49:19 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.263 12:49:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.263 Waiting for target to run... 00:05:07.263 12:49:19 json_config -- json_config/common.sh@25 -- # waitforlisten 61685 /var/tmp/spdk_tgt.sock 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 61685 ']' 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.263 12:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.263 [2024-07-15 12:49:19.691934] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:07.263 [2024-07-15 12:49:19.692098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61685 ] 00:05:07.830 [2024-07-15 12:49:19.988883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.830 [2024-07-15 12:49:20.059687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.089 [2024-07-15 12:49:20.374488] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.089 [2024-07-15 12:49:20.406588] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.347 12:49:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.347 12:49:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:08.347 00:05:08.347 12:49:20 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.347 12:49:20 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:08.347 12:49:20 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:08.347 INFO: Checking if target configuration is the same... 
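Note: this relaunch closes the save_config round trip — the configuration captured from the first instance is fed back verbatim through --json, and the TCP transport plus the 127.0.0.1:4420 listener come back without any further RPC calls. A hedged sketch of the same round trip, with paths and flags as they appear in this log:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Capture the live configuration of the running target...
  $RPC save_config > "$SPDK/spdk_tgt_config.json"
  # ...stop that target, then restore the identical state on a fresh one:
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK/spdk_tgt_config.json"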
00:05:08.347 12:49:20 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.347 12:49:20 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:08.347 12:49:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.347 + '[' 2 -ne 2 ']' 00:05:08.347 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:08.347 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:08.347 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:08.347 +++ basename /dev/fd/62 00:05:08.347 ++ mktemp /tmp/62.XXX 00:05:08.347 + tmp_file_1=/tmp/62.WqK 00:05:08.347 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.347 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.347 + tmp_file_2=/tmp/spdk_tgt_config.json.iwB 00:05:08.347 + ret=0 00:05:08.347 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:08.913 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:08.913 + diff -u /tmp/62.WqK /tmp/spdk_tgt_config.json.iwB 00:05:08.913 + echo 'INFO: JSON config files are the same' 00:05:08.913 INFO: JSON config files are the same 00:05:08.913 + rm /tmp/62.WqK /tmp/spdk_tgt_config.json.iwB 00:05:08.913 + exit 0 00:05:08.913 12:49:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:08.913 INFO: changing configuration and checking if this can be detected... 00:05:08.913 12:49:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:08.913 12:49:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:08.913 12:49:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.171 12:49:21 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.171 12:49:21 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:09.171 12:49:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.171 + '[' 2 -ne 2 ']' 00:05:09.171 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:09.171 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:09.171 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.171 +++ basename /dev/fd/62 00:05:09.171 ++ mktemp /tmp/62.XXX 00:05:09.171 + tmp_file_1=/tmp/62.0HX 00:05:09.171 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.171 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.171 + tmp_file_2=/tmp/spdk_tgt_config.json.H0U 00:05:09.171 + ret=0 00:05:09.171 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.737 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.737 + diff -u /tmp/62.0HX /tmp/spdk_tgt_config.json.H0U 00:05:09.737 + ret=1 00:05:09.737 + echo '=== Start of file: /tmp/62.0HX ===' 00:05:09.737 + cat /tmp/62.0HX 00:05:09.737 + echo '=== End of file: /tmp/62.0HX ===' 00:05:09.737 + echo '' 00:05:09.737 + echo '=== Start of file: /tmp/spdk_tgt_config.json.H0U ===' 00:05:09.737 + cat /tmp/spdk_tgt_config.json.H0U 00:05:09.737 + echo '=== End of file: /tmp/spdk_tgt_config.json.H0U ===' 00:05:09.737 + echo '' 00:05:09.737 + rm /tmp/62.0HX /tmp/spdk_tgt_config.json.H0U 00:05:09.737 + exit 1 00:05:09.737 INFO: configuration change detected. 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:09.737 12:49:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.737 12:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@317 -- # [[ -n 61685 ]] 00:05:09.737 12:49:21 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.738 12:49:22 json_config -- json_config/json_config.sh@323 -- # killprocess 61685 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@948 -- # '[' -z 61685 ']' 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@952 -- # kill -0 61685 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@953 -- # uname 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61685 00:05:09.738 
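Note: json_diff.sh declares the configurations equal only after normalizing both sides, so key order and formatting cannot produce false positives; once MallocBdevForConfigChangeCheck is deleted, the same diff returns 1 and the change is reported. A sketch of that comparison, assuming config_filter.py -method sort filters stdin to stdout as its use above suggests (temp-file names are illustrative):

  SPDK=/home/vagrant/spdk_repo/spdk
  FILTER="$SPDK/test/json_config/config_filter.py"
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
      | "$FILTER" -method sort > /tmp/live.json
  "$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
  if diff -u /tmp/saved.json /tmp/live.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi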
killing process with pid 61685 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61685' 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@967 -- # kill 61685 00:05:09.738 12:49:22 json_config -- common/autotest_common.sh@972 -- # wait 61685 00:05:09.996 12:49:22 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.996 12:49:22 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.997 12:49:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.997 12:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.997 INFO: Success 00:05:09.997 12:49:22 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:09.997 12:49:22 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.997 ************************************ 00:05:09.997 END TEST json_config 00:05:09.997 ************************************ 00:05:09.997 00:05:09.997 real 0m8.696s 00:05:09.997 user 0m12.927s 00:05:09.997 sys 0m1.549s 00:05:09.997 12:49:22 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.997 12:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.997 12:49:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.997 12:49:22 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.997 12:49:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.997 12:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.997 12:49:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.997 ************************************ 00:05:09.997 START TEST json_config_extra_key 00:05:09.997 ************************************ 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.997 12:49:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.997 12:49:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.997 12:49:22 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.997 12:49:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.997 12:49:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.997 12:49:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.997 12:49:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.997 12:49:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.997 12:49:22 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.997 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.997 12:49:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.997 INFO: launching applications... 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.997 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.997 Waiting for target to run... 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61856 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.997 12:49:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61856 /var/tmp/spdk_tgt.sock 00:05:09.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
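The "[: : integer expression expected" message above is bash's test builtin rejecting an empty string where -eq needs a number: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']' because the flag it checks is unset in this job, so the test fails noisily and the script simply carries on. A minimal illustration of the failure and the usual guard; SOME_TEST_FLAG is a made-up name standing in for whichever variable common.sh actually tests:

    SOME_TEST_FLAG=""
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled       # prints "integer expression expected" when the flag is empty
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled  # defaults to 0 and evaluates quietly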
00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61856 ']' 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.997 12:49:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 [2024-07-15 12:49:22.466336] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:10.255 [2024-07-15 12:49:22.466469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:05:10.514 [2024-07-15 12:49:22.783608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.514 [2024-07-15 12:49:22.829746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.094 00:05:11.094 INFO: shutting down applications... 00:05:11.094 12:49:23 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.094 12:49:23 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.094 12:49:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
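Everything up to this point is the standard launch-and-wait dance: spdk_tgt is started with an explicit -r RPC socket and --json pointing at extra_key.json, and waitforlisten polls that socket until the target answers. A stripped-down sketch of the same flow, not the autotest_common.sh implementation (the retry count and sleep here are placeholders; the harness's own waitforlisten uses the max_retries=100 counter seen in the log):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    for _ in $(seq 1 30); do
        # rpc_get_methods is a core SPDK RPC; a successful reply means the target is listening.
        "$rpc" -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

The shutdown that follows below is the mirror image: kill -SIGINT "$app_pid", then poll kill -0 "$app_pid" until the process is gone.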
00:05:11.094 12:49:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61856 ]] 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61856 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61856 00:05:11.094 12:49:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61856 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.658 SPDK target shutdown done 00:05:11.658 Success 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.658 12:49:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.658 12:49:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.658 00:05:11.658 real 0m1.698s 00:05:11.658 user 0m1.630s 00:05:11.658 sys 0m0.333s 00:05:11.658 ************************************ 00:05:11.658 END TEST json_config_extra_key 00:05:11.658 ************************************ 00:05:11.658 12:49:24 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.658 12:49:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.658 12:49:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.658 12:49:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.658 12:49:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.658 12:49:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.658 12:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.658 ************************************ 00:05:11.658 START TEST alias_rpc 00:05:11.658 ************************************ 00:05:11.658 12:49:24 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.916 * Looking for test storage... 00:05:11.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:11.916 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.916 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61932 00:05:11.916 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.916 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61932 00:05:11.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
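The alias_rpc run that begins above follows the same start-and-wait pattern against the default /var/tmp/spdk.sock socket; its actual payload, visible a little further down, is a single rpc.py load_config -i call that replays a JSON config exercising deprecated RPC method aliases. A rough sketch of that call, with the config path left as a placeholder because the test's own file name does not appear in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Mirrors the call shown in the log; the JSON config is fed on stdin.
    "$rpc" -s /var/tmp/spdk.sock load_config -i < /path/to/alias_conf.json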
00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61932 ']' 00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.916 12:49:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.916 [2024-07-15 12:49:24.219290] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:11.916 [2024-07-15 12:49:24.219662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61932 ] 00:05:11.916 [2024-07-15 12:49:24.357142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.174 [2024-07-15 12:49:24.446256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.174 12:49:24 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.174 12:49:24 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.174 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:12.739 12:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61932 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61932 ']' 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61932 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61932 00:05:12.739 killing process with pid 61932 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61932' 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@967 -- # kill 61932 00:05:12.739 12:49:24 alias_rpc -- common/autotest_common.sh@972 -- # wait 61932 00:05:12.997 ************************************ 00:05:12.997 END TEST alias_rpc 00:05:12.997 ************************************ 00:05:12.997 00:05:12.997 real 0m1.158s 00:05:12.997 user 0m1.399s 00:05:12.997 sys 0m0.302s 00:05:12.997 12:49:25 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.997 12:49:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.997 12:49:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.997 12:49:25 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:12.997 12:49:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.997 12:49:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.997 12:49:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.997 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.997 ************************************ 00:05:12.997 START TEST dpdk_mem_utility 
00:05:12.997 ************************************ 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.997 * Looking for test storage... 00:05:12.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.997 12:49:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.997 12:49:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.997 12:49:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62011 00:05:12.997 12:49:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62011 00:05:12.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62011 ']' 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.997 12:49:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.997 [2024-07-15 12:49:25.410832] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:12.997 [2024-07-15 12:49:25.410959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62011 ] 00:05:13.255 [2024-07-15 12:49:25.549718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.255 [2024-07-15 12:49:25.619690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.190 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.190 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:14.190 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:14.190 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:14.190 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.190 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.190 { 00:05:14.190 "filename": "/tmp/spdk_mem_dump.txt" 00:05:14.190 } 00:05:14.191 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.191 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:14.191 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:14.191 1 heaps totaling size 814.000000 MiB 00:05:14.191 size: 814.000000 MiB heap id: 0 00:05:14.191 end heaps---------- 00:05:14.191 8 mempools totaling size 598.116089 MiB 00:05:14.191 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:14.191 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:14.191 size: 
84.521057 MiB name: bdev_io_62011 00:05:14.191 size: 51.011292 MiB name: evtpool_62011 00:05:14.191 size: 50.003479 MiB name: msgpool_62011 00:05:14.191 size: 21.763794 MiB name: PDU_Pool 00:05:14.191 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:14.191 size: 0.026123 MiB name: Session_Pool 00:05:14.191 end mempools------- 00:05:14.191 6 memzones totaling size 4.142822 MiB 00:05:14.191 size: 1.000366 MiB name: RG_ring_0_62011 00:05:14.191 size: 1.000366 MiB name: RG_ring_1_62011 00:05:14.191 size: 1.000366 MiB name: RG_ring_4_62011 00:05:14.191 size: 1.000366 MiB name: RG_ring_5_62011 00:05:14.191 size: 0.125366 MiB name: RG_ring_2_62011 00:05:14.191 size: 0.015991 MiB name: RG_ring_3_62011 00:05:14.191 end memzones------- 00:05:14.191 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:14.191 heap id: 0 total size: 814.000000 MiB number of busy elements: 242 number of free elements: 15 00:05:14.191 list of free elements. size: 12.482544 MiB 00:05:14.191 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:14.191 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:14.191 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:14.191 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:14.191 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:14.191 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:14.191 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:14.191 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:14.191 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:14.191 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:05:14.191 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:14.191 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:14.191 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:14.191 element at address: 0x200027e00000 with size: 0.397766 MiB 00:05:14.191 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:14.191 list of standard malloc elements. 
size: 199.254883 MiB 00:05:14.191 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:14.191 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:14.191 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:14.191 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:14.191 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:14.191 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:14.191 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:14.191 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:14.191 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:14.191 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000087cec0 with size: 0.000183 MiB 
00:05:14.191 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:14.191 element at 
address: 0x2000194bc740 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:14.191 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa943c0 
with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:14.192 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e65d40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ca00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6dd40 with size: 0.000183 MiB 
00:05:14.192 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:14.192 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:14.192 list of memzone associated elements. 
size: 602.262573 MiB 00:05:14.192 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:14.192 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:14.192 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:14.192 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:14.192 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:14.192 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62011_0 00:05:14.192 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:14.192 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62011_0 00:05:14.192 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:14.192 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62011_0 00:05:14.192 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:14.192 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:14.192 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:14.192 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:14.192 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:14.192 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62011 00:05:14.192 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:14.192 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62011 00:05:14.192 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:14.192 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62011 00:05:14.192 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:14.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:14.192 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:14.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:14.192 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:14.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:14.192 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:14.192 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:14.192 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:14.192 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62011 00:05:14.192 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:14.192 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62011 00:05:14.192 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:14.192 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62011 00:05:14.192 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:14.192 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62011 00:05:14.192 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:14.192 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62011 00:05:14.192 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:14.192 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:14.192 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:14.192 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:14.193 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:14.193 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:14.193 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:14.193 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62011 00:05:14.193 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:14.193 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:14.193 element at address: 0x200027e65ec0 with size: 0.023743 MiB 00:05:14.193 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:14.193 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:14.193 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62011 00:05:14.193 element at address: 0x200027e6c000 with size: 0.002441 MiB 00:05:14.193 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:14.193 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:14.193 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62011 00:05:14.193 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:14.193 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62011 00:05:14.193 element at address: 0x200027e6cac0 with size: 0.000305 MiB 00:05:14.193 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:14.193 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:14.193 12:49:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62011 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62011 ']' 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62011 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62011 00:05:14.193 killing process with pid 62011 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62011' 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62011 00:05:14.193 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62011 00:05:14.451 00:05:14.451 real 0m1.549s 00:05:14.451 user 0m1.817s 00:05:14.451 sys 0m0.319s 00:05:14.451 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.451 ************************************ 00:05:14.451 END TEST dpdk_mem_utility 00:05:14.451 12:49:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.451 ************************************ 00:05:14.451 12:49:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.451 12:49:26 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.451 12:49:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.451 12:49:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.451 12:49:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.451 ************************************ 00:05:14.451 START TEST event 00:05:14.451 ************************************ 00:05:14.451 12:49:26 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.709 * Looking for test storage... 
00:05:14.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.709 12:49:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:14.709 12:49:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.709 12:49:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.709 12:49:26 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:14.709 12:49:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.709 12:49:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.709 ************************************ 00:05:14.709 START TEST event_perf 00:05:14.709 ************************************ 00:05:14.709 12:49:26 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.709 Running I/O for 1 seconds...[2024-07-15 12:49:26.952711] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:14.709 [2024-07-15 12:49:26.952864] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62100 ] 00:05:14.709 [2024-07-15 12:49:27.094696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.709 [2024-07-15 12:49:27.161381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.709 [2024-07-15 12:49:27.161478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.709 [2024-07-15 12:49:27.161523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.709 Running I/O for 1 seconds...[2024-07-15 12:49:27.161529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.083 00:05:16.083 lcore 0: 131204 00:05:16.083 lcore 1: 131207 00:05:16.083 lcore 2: 131196 00:05:16.083 lcore 3: 131200 00:05:16.083 done. 00:05:16.083 00:05:16.083 real 0m1.315s 00:05:16.083 user 0m4.121s 00:05:16.083 sys 0m0.054s 00:05:16.083 12:49:28 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.083 12:49:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.083 ************************************ 00:05:16.083 END TEST event_perf 00:05:16.083 ************************************ 00:05:16.083 12:49:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.083 12:49:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.083 12:49:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:16.083 12:49:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.083 12:49:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.083 ************************************ 00:05:16.083 START TEST event_reactor 00:05:16.083 ************************************ 00:05:16.083 12:49:28 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.083 [2024-07-15 12:49:28.302239] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:16.083 [2024-07-15 12:49:28.302328] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62133 ] 00:05:16.083 [2024-07-15 12:49:28.439639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.083 [2024-07-15 12:49:28.501174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.455 test_start 00:05:17.456 oneshot 00:05:17.456 tick 100 00:05:17.456 tick 100 00:05:17.456 tick 250 00:05:17.456 tick 100 00:05:17.456 tick 100 00:05:17.456 tick 250 00:05:17.456 tick 100 00:05:17.456 tick 500 00:05:17.456 tick 100 00:05:17.456 tick 100 00:05:17.456 tick 250 00:05:17.456 tick 100 00:05:17.456 tick 100 00:05:17.456 test_end 00:05:17.456 ************************************ 00:05:17.456 END TEST event_reactor 00:05:17.456 ************************************ 00:05:17.456 00:05:17.456 real 0m1.284s 00:05:17.456 user 0m1.134s 00:05:17.456 sys 0m0.043s 00:05:17.456 12:49:29 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.456 12:49:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:17.456 12:49:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.456 12:49:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.456 12:49:29 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:17.456 12:49:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.456 12:49:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.456 ************************************ 00:05:17.456 START TEST event_reactor_perf 00:05:17.456 ************************************ 00:05:17.456 12:49:29 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.456 [2024-07-15 12:49:29.630282] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:17.456 [2024-07-15 12:49:29.630414] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:05:17.456 [2024-07-15 12:49:29.792910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.456 [2024-07-15 12:49:29.876235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.848 test_start 00:05:18.848 test_end 00:05:18.848 Performance: 342629 events per second 00:05:18.848 00:05:18.848 real 0m1.338s 00:05:18.848 user 0m1.180s 00:05:18.848 sys 0m0.049s 00:05:18.848 12:49:30 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.848 12:49:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.848 ************************************ 00:05:18.848 END TEST event_reactor_perf 00:05:18.848 ************************************ 00:05:18.848 12:49:30 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.848 12:49:30 event -- event/event.sh@49 -- # uname -s 00:05:18.848 12:49:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.848 12:49:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.848 12:49:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.848 12:49:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.848 12:49:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.848 ************************************ 00:05:18.848 START TEST event_scheduler 00:05:18.848 ************************************ 00:05:18.848 12:49:30 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.848 * Looking for test storage... 00:05:18.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:18.848 12:49:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.848 12:49:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62230 00:05:18.848 12:49:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.848 12:49:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.848 12:49:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62230 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62230 ']' 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.848 12:49:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.848 [2024-07-15 12:49:31.198605] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:18.848 [2024-07-15 12:49:31.198798] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62230 ] 00:05:19.106 [2024-07-15 12:49:31.341528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.106 [2024-07-15 12:49:31.404046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.106 [2024-07-15 12:49:31.404132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.106 [2024-07-15 12:49:31.404091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.106 [2024-07-15 12:49:31.404135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.039 12:49:32 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.039 12:49:32 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:20.039 12:49:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.039 12:49:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.039 12:49:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.039 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.039 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.039 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.039 POWER: Cannot set governor of lcore 0 to performance 00:05:20.039 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.039 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.039 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.039 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.039 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:20.039 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:20.039 POWER: Unable to set Power Management Environment for lcore 0 00:05:20.039 [2024-07-15 12:49:32.359952] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:20.039 [2024-07-15 12:49:32.359992] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:20.039 [2024-07-15 12:49:32.360024] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.040 [2024-07-15 12:49:32.360092] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.040 [2024-07-15 12:49:32.360129] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.040 [2024-07-15 12:49:32.360158] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 [2024-07-15 12:49:32.418566] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
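At this point the scheduler test app, started with -m 0xF -p 0x2 --wait-for-rpc -f, has been switched to the dynamic scheduler and had its deferred initialization completed, all over RPC; the POWER/governor errors above are the dpdk governor failing to find cpufreq sysfs entries (expected in a VM without a cpufreq driver), after which it falls back as the log notes. A sketch of the same setup driven by hand with rpc.py against the default socket, approximating what rpc_cmd does here rather than quoting scheduler.sh:

    app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$app" -m 0xF -p 0x2 --wait-for-rpc -f &
    # (wait for /var/tmp/spdk.sock to answer, e.g. by polling rpc_get_methods as earlier)
    "$rpc" framework_set_scheduler dynamic   # governor warnings are tolerated here
    "$rpc" framework_start_init              # finishes the init deferred by --wait-for-rpc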
00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 ************************************ 00:05:20.040 START TEST scheduler_create_thread 00:05:20.040 ************************************ 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 2 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 3 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 4 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 5 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 6 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 7 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 8 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.040 9 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.040 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.307 10 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.307 12:49:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 ************************************ 00:05:21.246 END TEST scheduler_create_thread 00:05:21.246 ************************************ 00:05:21.246 12:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.246 00:05:21.246 real 0m1.173s 00:05:21.246 user 0m0.012s 00:05:21.246 sys 0m0.006s 00:05:21.246 12:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.246 12:49:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:21.246 12:49:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.246 12:49:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62230 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62230 ']' 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62230 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62230 00:05:21.246 killing process with pid 62230 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62230' 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62230 00:05:21.246 12:49:33 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62230 00:05:21.811 [2024-07-15 12:49:34.080841] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:21.811 ************************************ 00:05:21.811 END TEST event_scheduler 00:05:21.811 ************************************ 00:05:21.811 00:05:21.811 real 0m3.253s 00:05:21.811 user 0m6.629s 00:05:21.811 sys 0m0.314s 00:05:21.811 12:49:34 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.811 12:49:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.811 12:49:34 event -- common/autotest_common.sh@1142 -- # return 0 00:05:21.811 12:49:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.069 12:49:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.069 12:49:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.069 12:49:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.069 12:49:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.069 ************************************ 00:05:22.069 START TEST app_repeat 00:05:22.069 ************************************ 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.069 Process app_repeat pid: 62331 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62331 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62331' 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.069 spdk_app_start Round 0 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.069 12:49:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.069 12:49:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.069 [2024-07-15 12:49:34.314533] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:22.069 [2024-07-15 12:49:34.314638] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62331 ] 00:05:22.069 [2024-07-15 12:49:34.459468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.327 [2024-07-15 12:49:34.543653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.327 [2024-07-15 12:49:34.543664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.327 12:49:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.327 12:49:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:22.327 12:49:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.583 Malloc0 00:05:22.583 12:49:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.147 Malloc1 00:05:23.147 12:49:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.147 12:49:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.404 /dev/nbd0 00:05:23.404 12:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.404 12:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.404 12:49:35 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.404 1+0 records in 00:05:23.404 1+0 records out 00:05:23.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029013 s, 14.1 MB/s 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.404 12:49:35 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.404 12:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.404 12:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.404 12:49:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.662 /dev/nbd1 00:05:23.662 12:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.662 12:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.662 12:49:35 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:23.662 12:49:35 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:23.662 12:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.662 12:49:35 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.662 12:49:35 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.662 1+0 records in 00:05:23.662 1+0 records out 00:05:23.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328598 s, 12.5 MB/s 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:23.662 12:49:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.663 12:49:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.663 12:49:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:23.663 12:49:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.663 12:49:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.663 12:49:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.663 12:49:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.663 
12:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.920 { 00:05:23.920 "bdev_name": "Malloc0", 00:05:23.920 "nbd_device": "/dev/nbd0" 00:05:23.920 }, 00:05:23.920 { 00:05:23.920 "bdev_name": "Malloc1", 00:05:23.920 "nbd_device": "/dev/nbd1" 00:05:23.920 } 00:05:23.920 ]' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.920 { 00:05:23.920 "bdev_name": "Malloc0", 00:05:23.920 "nbd_device": "/dev/nbd0" 00:05:23.920 }, 00:05:23.920 { 00:05:23.920 "bdev_name": "Malloc1", 00:05:23.920 "nbd_device": "/dev/nbd1" 00:05:23.920 } 00:05:23.920 ]' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.920 /dev/nbd1' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.920 /dev/nbd1' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.920 256+0 records in 00:05:23.920 256+0 records out 00:05:23.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781859 s, 134 MB/s 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.920 256+0 records in 00:05:23.920 256+0 records out 00:05:23.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025771 s, 40.7 MB/s 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.920 12:49:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.177 256+0 records in 00:05:24.177 256+0 records out 00:05:24.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310993 s, 33.7 MB/s 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.177 12:49:36 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.177 12:49:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.434 12:49:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.691 12:49:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.691 12:49:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.257 12:49:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.257 12:49:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.514 12:49:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.772 [2024-07-15 12:49:37.984031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.772 [2024-07-15 12:49:38.044210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.772 [2024-07-15 12:49:38.044222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.772 [2024-07-15 12:49:38.074579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.772 [2024-07-15 12:49:38.074649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.050 12:49:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.050 spdk_app_start Round 1 00:05:29.050 12:49:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.050 12:49:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.050 12:49:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.050 12:49:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.050 12:49:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.050 12:49:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.307 Malloc0 00:05:29.307 12:49:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.871 Malloc1 00:05:29.871 12:49:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.871 12:49:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.129 /dev/nbd0 00:05:30.129 12:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.129 12:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.129 1+0 records in 00:05:30.129 1+0 records out 
00:05:30.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254593 s, 16.1 MB/s 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.129 12:49:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.129 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.129 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.129 12:49:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.387 /dev/nbd1 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.387 1+0 records in 00:05:30.387 1+0 records out 00:05:30.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392 s, 10.4 MB/s 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.387 12:49:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.387 12:49:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.645 { 00:05:30.645 "bdev_name": "Malloc0", 00:05:30.645 "nbd_device": "/dev/nbd0" 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "bdev_name": "Malloc1", 00:05:30.645 "nbd_device": "/dev/nbd1" 00:05:30.645 } 00:05:30.645 
]' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.645 { 00:05:30.645 "bdev_name": "Malloc0", 00:05:30.645 "nbd_device": "/dev/nbd0" 00:05:30.645 }, 00:05:30.645 { 00:05:30.645 "bdev_name": "Malloc1", 00:05:30.645 "nbd_device": "/dev/nbd1" 00:05:30.645 } 00:05:30.645 ]' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.645 /dev/nbd1' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.645 /dev/nbd1' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.645 256+0 records in 00:05:30.645 256+0 records out 00:05:30.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077568 s, 135 MB/s 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.645 12:49:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.902 256+0 records in 00:05:30.902 256+0 records out 00:05:30.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282815 s, 37.1 MB/s 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.902 256+0 records in 00:05:30.902 256+0 records out 00:05:30.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295616 s, 35.5 MB/s 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.902 12:49:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.903 12:49:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.903 12:49:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.903 12:49:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.160 12:49:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.727 12:49:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.986 12:49:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.986 12:49:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.245 12:49:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.245 [2024-07-15 12:49:44.666610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.503 [2024-07-15 12:49:44.727476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.503 [2024-07-15 12:49:44.727481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.503 [2024-07-15 12:49:44.758265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.503 [2024-07-15 12:49:44.758339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.116 12:49:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.116 spdk_app_start Round 2 00:05:35.116 12:49:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.116 12:49:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.116 12:49:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.681 12:49:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.681 12:49:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.681 12:49:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.937 Malloc0 00:05:35.937 12:49:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.502 Malloc1 00:05:36.502 12:49:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.502 12:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.502 /dev/nbd0 00:05:36.759 12:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.759 12:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.759 1+0 records in 00:05:36.759 1+0 records out 
00:05:36.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329845 s, 12.4 MB/s 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.759 12:49:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.759 12:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.759 12:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.759 12:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.047 /dev/nbd1 00:05:37.047 12:49:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.048 1+0 records in 00:05:37.048 1+0 records out 00:05:37.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427936 s, 9.6 MB/s 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.048 12:49:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.048 12:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.305 { 00:05:37.305 "bdev_name": "Malloc0", 00:05:37.305 "nbd_device": "/dev/nbd0" 00:05:37.305 }, 00:05:37.305 { 00:05:37.305 "bdev_name": "Malloc1", 00:05:37.305 "nbd_device": "/dev/nbd1" 00:05:37.305 } 
00:05:37.305 ]' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.305 { 00:05:37.305 "bdev_name": "Malloc0", 00:05:37.305 "nbd_device": "/dev/nbd0" 00:05:37.305 }, 00:05:37.305 { 00:05:37.305 "bdev_name": "Malloc1", 00:05:37.305 "nbd_device": "/dev/nbd1" 00:05:37.305 } 00:05:37.305 ]' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.305 /dev/nbd1' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.305 /dev/nbd1' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.305 12:49:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.306 256+0 records in 00:05:37.306 256+0 records out 00:05:37.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672887 s, 156 MB/s 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.306 256+0 records in 00:05:37.306 256+0 records out 00:05:37.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260182 s, 40.3 MB/s 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.306 256+0 records in 00:05:37.306 256+0 records out 00:05:37.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258438 s, 40.6 MB/s 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.306 12:49:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.306 12:49:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.869 12:49:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.125 12:49:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.382 12:49:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.382 12:49:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.946 12:49:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.946 [2024-07-15 12:49:51.332471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.946 [2024-07-15 12:49:51.391746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.946 [2024-07-15 12:49:51.391758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.204 [2024-07-15 12:49:51.421113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.204 [2024-07-15 12:49:51.421176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.768 12:49:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
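For readers skimming the trace, the nbd_common.sh write/verify/teardown steps above reduce to the following pattern. This is a condensed sketch built only from the paths, flags and RPC names that appear in the trace; the waitfornbd retry helpers are omitted.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  # 1 MiB of random data, copied onto each NBD device and read back for comparison
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmp" "$dev"              # fails loudly if the device contents differ
  done
  rm "$tmp"
  for dev in /dev/nbd0 /dev/nbd1; do
      "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
  done
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks   # should report no remaining NBD devices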
00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.768 12:49:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.333 12:49:54 event.app_repeat -- event/event.sh@39 -- # killprocess 62331 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62331 ']' 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62331 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62331 00:05:42.333 killing process with pid 62331 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62331' 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62331 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62331 00:05:42.333 spdk_app_start is called in Round 0. 00:05:42.333 Shutdown signal received, stop current app iteration 00:05:42.333 Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 reinitialization... 00:05:42.333 spdk_app_start is called in Round 1. 00:05:42.333 Shutdown signal received, stop current app iteration 00:05:42.333 Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 reinitialization... 00:05:42.333 spdk_app_start is called in Round 2. 00:05:42.333 Shutdown signal received, stop current app iteration 00:05:42.333 Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 reinitialization... 00:05:42.333 spdk_app_start is called in Round 3. 
00:05:42.333 Shutdown signal received, stop current app iteration 00:05:42.333 12:49:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.333 12:49:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.333 00:05:42.333 real 0m20.449s 00:05:42.333 user 0m47.295s 00:05:42.333 sys 0m3.147s 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.333 ************************************ 00:05:42.333 END TEST app_repeat 00:05:42.333 ************************************ 00:05:42.333 12:49:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.333 12:49:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.333 12:49:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.333 12:49:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.333 12:49:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.333 12:49:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.333 12:49:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.333 ************************************ 00:05:42.333 START TEST cpu_locks 00:05:42.333 ************************************ 00:05:42.333 12:49:54 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.591 * Looking for test storage... 00:05:42.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.591 12:49:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.591 12:49:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.591 12:49:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.591 12:49:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.591 12:49:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.591 12:49:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.591 12:49:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.591 ************************************ 00:05:42.591 START TEST default_locks 00:05:42.591 ************************************ 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62965 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62965 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62965 ']' 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.591 12:49:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.591 [2024-07-15 12:49:54.937293] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:42.591 [2024-07-15 12:49:54.937436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62965 ] 00:05:42.850 [2024-07-15 12:49:55.081845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.850 [2024-07-15 12:49:55.155815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.784 12:49:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.784 12:49:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:43.784 12:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62965 00:05:43.784 12:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62965 00:05:43.784 12:49:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62965 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62965 ']' 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62965 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62965 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.043 killing process with pid 62965 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62965' 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62965 00:05:44.043 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62965 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62965 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62965 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62965 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62965 ']' 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.300 ERROR: process (pid: 62965) is no longer running 00:05:44.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62965) - No such process 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.300 00:05:44.300 real 0m1.803s 00:05:44.300 user 0m2.045s 00:05:44.300 sys 0m0.506s 00:05:44.300 ************************************ 00:05:44.300 END TEST default_locks 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.300 12:49:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.300 ************************************ 00:05:44.300 12:49:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.300 12:49:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.300 12:49:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.300 12:49:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.300 12:49:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.300 ************************************ 00:05:44.300 START TEST default_locks_via_rpc 00:05:44.300 ************************************ 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63023 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63023 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 63023 ']' 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.300 12:49:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.557 [2024-07-15 12:49:56.776236] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:44.557 [2024-07-15 12:49:56.776955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63023 ] 00:05:44.557 [2024-07-15 12:49:56.915621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.557 [2024-07-15 12:49:57.003462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63023 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63023 00:05:45.490 12:49:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63023 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63023 ']' 
00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63023 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63023 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.055 killing process with pid 63023 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63023' 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63023 00:05:46.055 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63023 00:05:46.312 00:05:46.312 real 0m1.907s 00:05:46.312 user 0m2.226s 00:05:46.312 sys 0m0.505s 00:05:46.312 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.312 12:49:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.312 ************************************ 00:05:46.312 END TEST default_locks_via_rpc 00:05:46.312 ************************************ 00:05:46.312 12:49:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.312 12:49:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:46.312 12:49:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.312 12:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.312 12:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.312 ************************************ 00:05:46.312 START TEST non_locking_app_on_locked_coremask 00:05:46.312 ************************************ 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63092 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63092 /var/tmp/spdk.sock 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63092 ']' 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
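The default_locks and default_locks_via_rpc runs above both come down to checking the per-core lock file of a running target, the second one toggling it over RPC. A sketch assembled from the commands visible in the trace (the waitforlisten retry loop is left out, and rpc.py stands in for the test suite's rpc_cmd wrapper):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$tgt" -m 0x1 &                                  # single-core target claims the core-0 lock
  pid=$!
  lslocks -p "$pid" | grep -q spdk_cpu_lock        # lock file is held
  "$rpc" framework_disable_cpumask_locks           # release the lock at runtime
  "$rpc" framework_enable_cpumask_locks            # take it back
  lslocks -p "$pid" | grep -q spdk_cpu_lock        # held again
  kill "$pid"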
00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.312 12:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.312 [2024-07-15 12:49:58.726415] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:46.312 [2024-07-15 12:49:58.726554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63092 ] 00:05:46.567 [2024-07-15 12:49:58.890083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.567 [2024-07-15 12:49:58.979921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63126 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63126 /var/tmp/spdk2.sock 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63126 ']' 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.498 12:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.755 [2024-07-15 12:49:59.975925] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:47.755 [2024-07-15 12:49:59.976053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63126 ] 00:05:47.755 [2024-07-15 12:50:00.126229] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.755 [2024-07-15 12:50:00.126302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.013 [2024-07-15 12:50:00.248118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.947 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.947 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.947 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63092 00:05:48.947 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.947 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63092 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63092 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63092 ']' 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63092 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.511 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63092 00:05:49.769 killing process with pid 63092 00:05:49.769 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.769 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.769 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63092' 00:05:49.769 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63092 00:05:49.769 12:50:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63092 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63126 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63126 ']' 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63126 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63126 00:05:50.335 killing process with pid 63126 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63126' 00:05:50.335 12:50:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63126 00:05:50.335 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63126 00:05:50.594 ************************************ 00:05:50.594 END TEST non_locking_app_on_locked_coremask 00:05:50.594 ************************************ 00:05:50.594 00:05:50.594 real 0m4.204s 00:05:50.594 user 0m5.176s 00:05:50.594 sys 0m1.003s 00:05:50.594 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.594 12:50:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 12:50:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.594 12:50:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.594 12:50:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.594 12:50:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.594 12:50:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 ************************************ 00:05:50.594 START TEST locking_app_on_unlocked_coremask 00:05:50.594 ************************************ 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63205 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63205 /var/tmp/spdk.sock 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63205 ']' 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.594 12:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.594 [2024-07-15 12:50:02.970855] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:50.594 [2024-07-15 12:50:02.971268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:05:50.862 [2024-07-15 12:50:03.115452] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
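The non_locking_app_on_locked_coremask run above demonstrates the escape hatch the rest of the suite relies on: a second target may share an already-locked core when it is started with --disable-cpumask-locks. In outline (masks, sockets and flags as they appear in the trace; startup waits omitted):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x1 &                                              # holds the core-0 lock
  pid1=$!
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                                      # logs "CPU core locks deactivated." and runs anyway
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                   # only the first instance holds the lock
  kill "$pid1" "$pid2"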
00:05:50.862 [2024-07-15 12:50:03.115530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.862 [2024-07-15 12:50:03.177592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63233 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63233 /var/tmp/spdk2.sock 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63233 ']' 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.801 12:50:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.801 [2024-07-15 12:50:04.004523] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:51.801 [2024-07-15 12:50:04.005204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:05:51.801 [2024-07-15 12:50:04.151736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.059 [2024-07-15 12:50:04.272045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.625 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.625 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.625 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63233 00:05:52.625 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.625 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63233 00:05:53.556 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63205 00:05:53.556 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63205 ']' 00:05:53.556 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63205 00:05:53.556 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63205 00:05:53.557 killing process with pid 63205 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63205' 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63205 00:05:53.557 12:50:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63205 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63233 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63233 ']' 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63233 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63233 00:05:54.128 killing process with pid 63233 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.128 12:50:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63233' 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63233 00:05:54.128 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63233 00:05:54.385 ************************************ 00:05:54.385 END TEST locking_app_on_unlocked_coremask 00:05:54.385 ************************************ 00:05:54.385 00:05:54.385 real 0m3.726s 00:05:54.385 user 0m4.413s 00:05:54.385 sys 0m0.931s 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.385 12:50:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.385 12:50:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.385 12:50:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.385 12:50:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.385 12:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.385 ************************************ 00:05:54.385 START TEST locking_app_on_locked_coremask 00:05:54.385 ************************************ 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63307 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63307 /var/tmp/spdk.sock 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63307 ']' 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.385 12:50:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.385 [2024-07-15 12:50:06.742314] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:54.385 [2024-07-15 12:50:06.742452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63307 ] 00:05:54.641 [2024-07-15 12:50:06.889052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.641 [2024-07-15 12:50:06.971987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63335 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63335 /var/tmp/spdk2.sock 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63335 /var/tmp/spdk2.sock 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63335 /var/tmp/spdk2.sock 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63335 ']' 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.574 12:50:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.574 [2024-07-15 12:50:07.839996] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:55.574 [2024-07-15 12:50:07.840135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:05:55.574 [2024-07-15 12:50:07.989495] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63307 has claimed it. 00:05:55.574 [2024-07-15 12:50:07.989594] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.140 ERROR: process (pid: 63335) is no longer running 00:05:56.140 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63335) - No such process 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63307 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63307 00:05:56.140 12:50:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63307 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63307 ']' 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63307 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63307 00:05:56.707 killing process with pid 63307 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63307' 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63307 00:05:56.707 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63307 00:05:56.965 00:05:56.965 real 0m2.654s 00:05:56.965 user 0m3.285s 00:05:56.965 sys 0m0.575s 00:05:56.965 ************************************ 00:05:56.965 END TEST locking_app_on_locked_coremask 00:05:56.965 ************************************ 00:05:56.965 12:50:09 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.965 12:50:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.965 12:50:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.965 12:50:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.965 12:50:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.965 12:50:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.965 12:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.965 ************************************ 00:05:56.965 START TEST locking_overlapped_coremask 00:05:56.965 ************************************ 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63387 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63387 /var/tmp/spdk.sock 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63387 ']' 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.965 12:50:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.223 [2024-07-15 12:50:09.433451] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:57.223 [2024-07-15 12:50:09.433585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63387 ] 00:05:57.223 [2024-07-15 12:50:09.573123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.223 [2024-07-15 12:50:09.662413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.223 [2024-07-15 12:50:09.662526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.223 [2024-07-15 12:50:09.662535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63417 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63417 /var/tmp/spdk2.sock 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63417 /var/tmp/spdk2.sock 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63417 /var/tmp/spdk2.sock 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63417 ']' 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.156 12:50:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 [2024-07-15 12:50:10.613750] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:05:58.156 [2024-07-15 12:50:10.614325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:05:58.414 [2024-07-15 12:50:10.758587] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63387 has claimed it. 00:05:58.414 [2024-07-15 12:50:10.758660] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.980 ERROR: process (pid: 63417) is no longer running 00:05:58.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63417) - No such process 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63387 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63387 ']' 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63387 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.980 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63387 00:05:59.238 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.238 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.238 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63387' 00:05:59.238 killing process with pid 63387 00:05:59.238 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63387 00:05:59.238 12:50:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63387 00:05:59.497 ************************************ 00:05:59.497 END TEST locking_overlapped_coremask 00:05:59.497 ************************************ 00:05:59.497 00:05:59.497 real 0m2.395s 00:05:59.497 user 0m6.956s 00:05:59.497 sys 0m0.358s 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.497 12:50:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.497 12:50:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.497 12:50:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.497 12:50:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.497 12:50:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.497 ************************************ 00:05:59.497 START TEST locking_overlapped_coremask_via_rpc 00:05:59.497 ************************************ 00:05:59.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63464 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63464 /var/tmp/spdk.sock 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63464 ']' 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.497 12:50:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.497 [2024-07-15 12:50:11.872975] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:05:59.497 [2024-07-15 12:50:11.873114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:05:59.759 [2024-07-15 12:50:12.037220] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
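The locking_overlapped_coremask run above is the negative case: with locks enabled, a second target whose core mask overlaps an already-claimed core must refuse to start. A condensed sketch of what the trace exercises (same masks and sockets as in the trace; the NOT/waitforlisten plumbing around the expected failure is omitted):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x7 &                              # claims cores 0, 1 and 2
  pid1=$!
  "$tgt" -m 0x1c -r /var/tmp/spdk2.sock        # cores 2, 3, 4 overlap on core 2; exits with
                                               # "Cannot create lock on core 2, probably process ... has claimed it"
  echo "overlapping launch exited with status $?"
  kill "$pid1"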
00:05:59.759 [2024-07-15 12:50:12.037308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.759 [2024-07-15 12:50:12.122588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.759 [2024-07-15 12:50:12.122671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.759 [2024-07-15 12:50:12.122678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63504 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63504 /var/tmp/spdk2.sock 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63504 ']' 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.720 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.720 [2024-07-15 12:50:13.158742] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:00.720 [2024-07-15 12:50:13.159150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:06:00.990 [2024-07-15 12:50:13.309437] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
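Both targets start cleanly here even though their masks share core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), because --disable-cpumask-locks skips the lock files at startup, which is what the "CPU core locks deactivated." notices report. A condensed sketch of the overlap being set up, using the flags shown in the trace:

    spdk_tgt -m 0x7 --disable-cpumask-locks                          # pid 63464: cores 0-2, no locks taken yet
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # pid 63504: cores 2-4, also starts despite the shared core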
00:06:00.990 [2024-07-15 12:50:13.309500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.990 [2024-07-15 12:50:13.431654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.990 [2024-07-15 12:50:13.434828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.990 [2024-07-15 12:50:13.434831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.571 [2024-07-15 12:50:13.764954] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63464 has claimed it. 
00:06:01.571 2024/07/15 12:50:13 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:01.571 request: 00:06:01.571 { 00:06:01.571 "method": "framework_enable_cpumask_locks", 00:06:01.571 "params": {} 00:06:01.571 } 00:06:01.571 Got JSON-RPC error response 00:06:01.571 GoRPCClient: error on JSON-RPC call 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63464 /var/tmp/spdk.sock 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63464 ']' 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.571 12:50:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63504 /var/tmp/spdk2.sock 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63504 ']' 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
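Outside the harness, the two rpc_cmd calls above amount to the following. The scripts/rpc.py wrapper is an assumption (the trace only shows the rpc_cmd helper), but the method name, socket path, and error code are taken from the log:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target on /var/tmp/spdk.sock: claims cores 0-2 and succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails with -32603, core 2 is already locked by pid 63464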
00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.829 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.085 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.085 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:02.085 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.085 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.085 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.085 ************************************ 00:06:02.085 END TEST locking_overlapped_coremask_via_rpc 00:06:02.085 ************************************ 00:06:02.086 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.086 00:06:02.086 real 0m2.677s 00:06:02.086 user 0m1.651s 00:06:02.086 sys 0m0.191s 00:06:02.086 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.086 12:50:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:02.086 12:50:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.086 12:50:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63464 ]] 00:06:02.086 12:50:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63464 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63464 ']' 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63464 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63464 00:06:02.086 killing process with pid 63464 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63464' 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63464 00:06:02.086 12:50:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63464 00:06:02.651 12:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63504 ]] 00:06:02.651 12:50:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63504 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63504 ']' 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63504 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:02.651 12:50:14 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63504 00:06:02.651 killing process with pid 63504 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63504' 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63504 00:06:02.651 12:50:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63504 00:06:02.651 12:50:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.651 Process with pid 63464 is not found 00:06:02.651 Process with pid 63504 is not found 00:06:02.651 12:50:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.652 12:50:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63464 ]] 00:06:02.652 12:50:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63464 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63464 ']' 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63464 00:06:02.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63464) - No such process 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63464 is not found' 00:06:02.652 12:50:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63504 ]] 00:06:02.652 12:50:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63504 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63504 ']' 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63504 00:06:02.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63504) - No such process 00:06:02.652 12:50:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63504 is not found' 00:06:02.652 12:50:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.909 00:06:02.909 real 0m20.341s 00:06:02.909 user 0m37.132s 00:06:02.909 sys 0m4.707s 00:06:02.909 12:50:15 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.909 12:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.909 ************************************ 00:06:02.909 END TEST cpu_locks 00:06:02.909 ************************************ 00:06:02.909 12:50:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:02.909 00:06:02.909 real 0m48.299s 00:06:02.909 user 1m37.604s 00:06:02.909 sys 0m8.506s 00:06:02.909 12:50:15 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.909 12:50:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.909 ************************************ 00:06:02.909 END TEST event 00:06:02.909 ************************************ 00:06:02.909 12:50:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.909 12:50:15 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.909 12:50:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.909 12:50:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.909 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.909 ************************************ 00:06:02.909 START TEST thread 
00:06:02.909 ************************************ 00:06:02.909 12:50:15 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.909 * Looking for test storage... 00:06:02.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.909 12:50:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.909 12:50:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.909 12:50:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.909 12:50:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.909 ************************************ 00:06:02.909 START TEST thread_poller_perf 00:06:02.909 ************************************ 00:06:02.909 12:50:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.909 [2024-07-15 12:50:15.288096] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:02.909 [2024-07-15 12:50:15.288198] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63637 ] 00:06:03.166 [2024-07-15 12:50:15.418295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.166 [2024-07-15 12:50:15.506997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.166 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:04.536 ====================================== 00:06:04.536 busy:2210480269 (cyc) 00:06:04.536 total_run_count: 291000 00:06:04.536 tsc_hz: 2200000000 (cyc) 00:06:04.536 ====================================== 00:06:04.536 poller_cost: 7596 (cyc), 3452 (nsec) 00:06:04.536 00:06:04.536 real 0m1.320s 00:06:04.536 user 0m1.169s 00:06:04.536 sys 0m0.042s 00:06:04.536 12:50:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.536 12:50:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.536 ************************************ 00:06:04.536 END TEST thread_poller_perf 00:06:04.536 ************************************ 00:06:04.536 12:50:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:04.536 12:50:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.536 12:50:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:04.536 12:50:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.536 12:50:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.536 ************************************ 00:06:04.536 START TEST thread_poller_perf 00:06:04.536 ************************************ 00:06:04.536 12:50:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.536 [2024-07-15 12:50:16.649366] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:06:04.536 [2024-07-15 12:50:16.649988] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63667 ] 00:06:04.536 [2024-07-15 12:50:16.788127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.536 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.536 [2024-07-15 12:50:16.875548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.908 ====================================== 00:06:05.908 busy:2202933671 (cyc) 00:06:05.908 total_run_count: 3970000 00:06:05.908 tsc_hz: 2200000000 (cyc) 00:06:05.908 ====================================== 00:06:05.908 poller_cost: 554 (cyc), 251 (nsec) 00:06:05.908 ************************************ 00:06:05.908 END TEST thread_poller_perf 00:06:05.908 ************************************ 00:06:05.908 00:06:05.908 real 0m1.317s 00:06:05.908 user 0m1.164s 00:06:05.908 sys 0m0.044s 00:06:05.908 12:50:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.908 12:50:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.908 12:50:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:05.908 12:50:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.908 ************************************ 00:06:05.908 END TEST thread 00:06:05.908 ************************************ 00:06:05.908 00:06:05.908 real 0m2.793s 00:06:05.908 user 0m2.391s 00:06:05.908 sys 0m0.179s 00:06:05.908 12:50:17 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.908 12:50:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.908 12:50:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.908 12:50:18 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:05.908 12:50:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.908 12:50:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.908 12:50:18 -- common/autotest_common.sh@10 -- # set +x 00:06:05.908 ************************************ 00:06:05.908 START TEST accel 00:06:05.908 ************************************ 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:05.908 * Looking for test storage... 00:06:05.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:05.908 12:50:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:05.908 12:50:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:05.908 12:50:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.908 12:50:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63742 00:06:05.908 12:50:18 accel -- accel/accel.sh@63 -- # waitforlisten 63742 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@829 -- # '[' -z 63742 ']' 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
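The poller_cost figures in the two reports above are just the printed counters divided out, a reading of the output rather than extra data:

    first run  (-l 1): 2210480269 cyc / 291000 runs  = 7596 cyc per poll; at tsc_hz 2200000000 (2.2 cyc/ns) that is about 3452 nsec
    second run (-l 0): 2202933671 cyc / 3970000 runs = 554 cyc per poll, about 251 nsec

With the poller period dropped from 1 microsecond to 0, the same one-second window fits roughly 13-14 times as many poller runs.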
00:06:05.908 12:50:18 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:05.908 12:50:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.908 12:50:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.908 12:50:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.908 12:50:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.908 12:50:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.908 12:50:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.908 12:50:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.908 12:50:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.908 12:50:18 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.908 [2024-07-15 12:50:18.156146] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:05.908 [2024-07-15 12:50:18.156234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63742 ] 00:06:05.908 [2024-07-15 12:50:18.284417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.908 [2024-07-15 12:50:18.346084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.167 12:50:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.167 12:50:18 accel -- common/autotest_common.sh@862 -- # return 0 00:06:06.167 12:50:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:06.167 12:50:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:06.167 12:50:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:06.167 12:50:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:06.167 12:50:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.167 12:50:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:06.167 12:50:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.167 12:50:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.167 12:50:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.167 12:50:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.167 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.167 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.167 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.167 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.167 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.167 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 
12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:06.168 12:50:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:06.168 12:50:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:06.168 12:50:18 accel -- accel/accel.sh@75 -- # killprocess 63742 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@948 -- # '[' -z 63742 ']' 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@952 -- # kill -0 63742 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@953 -- # uname 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63742 00:06:06.168 killing process with pid 63742 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63742' 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@967 -- # kill 63742 00:06:06.168 12:50:18 accel -- common/autotest_common.sh@972 -- # wait 63742 00:06:06.425 12:50:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:06.425 12:50:18 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:06.425 12:50:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.425 12:50:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.425 12:50:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.425 12:50:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:06.425 12:50:18 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
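The loop traced above walks the target's opcode-to-module table and records that every operation in this run is assigned to the software module. The same dump could be produced by hand roughly as follows; the jq filter is the one from the trace, while calling scripts/rpc.py directly (instead of the harness's rpc_cmd helper) is an assumption:

    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # in this run: one "<opcode>=software" line per operation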
00:06:06.683 12:50:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.683 12:50:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:06.683 12:50:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.683 12:50:18 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:06.683 12:50:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.683 12:50:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.683 12:50:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.683 ************************************ 00:06:06.683 START TEST accel_missing_filename 00:06:06.683 ************************************ 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.683 12:50:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:06.683 12:50:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:06.683 [2024-07-15 12:50:18.954234] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:06.683 [2024-07-15 12:50:18.954323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:06:06.683 [2024-07-15 12:50:19.093149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.940 [2024-07-15 12:50:19.167840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.940 [2024-07-15 12:50:19.199365] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.940 [2024-07-15 12:50:19.241040] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:06.940 A filename is required. 
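That failure is the expected outcome: per the tool's own option list shown a little further down, compress and decompress workloads take their input from the file named by -l, so omitting it aborts before any work is submitted. A sketch of the corrected form, using the same binary and the bib input file the next test passes:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # -y is left off on purpose; the accel_compress_verify case that follows shows compress rejects the verify option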
00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:06.940 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.940 00:06:06.941 real 0m0.416s 00:06:06.941 user 0m0.279s 00:06:06.941 sys 0m0.074s 00:06:06.941 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.941 ************************************ 00:06:06.941 END TEST accel_missing_filename 00:06:06.941 ************************************ 00:06:06.941 12:50:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:06.941 12:50:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.941 12:50:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.941 12:50:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:06.941 12:50:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.941 12:50:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.941 ************************************ 00:06:06.941 START TEST accel_compress_verify 00:06:06.941 ************************************ 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.941 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.941 12:50:19 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:06.941 12:50:19 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.198 [2024-07-15 12:50:19.416812] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:07.198 [2024-07-15 12:50:19.416949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63824 ] 00:06:07.199 [2024-07-15 12:50:19.554438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.199 [2024-07-15 12:50:19.640843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.470 [2024-07-15 12:50:19.676972] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.470 [2024-07-15 12:50:19.723279] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:07.470 00:06:07.470 Compression does not support the verify option, aborting. 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:07.470 ************************************ 00:06:07.470 END TEST accel_compress_verify 00:06:07.470 ************************************ 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.470 00:06:07.470 real 0m0.412s 00:06:07.470 user 0m0.279s 00:06:07.470 sys 0m0.091s 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.470 12:50:19 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.470 12:50:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.470 12:50:19 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:07.470 12:50:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.470 12:50:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.470 12:50:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.470 ************************************ 00:06:07.470 START TEST accel_wrong_workload 00:06:07.470 ************************************ 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.470 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:07.470 12:50:19 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:07.470 Unsupported workload type: foobar 00:06:07.470 [2024-07-15 12:50:19.867909] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:07.470 accel_perf options: 00:06:07.470 [-h help message] 00:06:07.470 [-q queue depth per core] 00:06:07.471 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.471 [-T number of threads per core 00:06:07.471 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.471 [-t time in seconds] 00:06:07.471 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.471 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.471 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.471 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.471 [-S for crc32c workload, use this seed value (default 0) 00:06:07.471 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.471 [-f for fill workload, use this BYTE value (default 255) 00:06:07.471 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.471 [-y verify result if this switch is on] 00:06:07.471 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.471 Can be used to spread operations across a wider range of memory. 
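Read against that option list, the later positive cases are straightforward to decode. The crc32c invocation exercised further down combines the options as follows (the harness also passes its JSON config via -c /dev/fd/62, omitted here):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # run the crc32c workload for 1 second with seed value 32 and verify the results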
00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.471 00:06:07.471 real 0m0.031s 00:06:07.471 user 0m0.013s 00:06:07.471 sys 0m0.017s 00:06:07.471 ************************************ 00:06:07.471 END TEST accel_wrong_workload 00:06:07.471 ************************************ 00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.471 12:50:19 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:07.471 12:50:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.471 12:50:19 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.471 12:50:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:07.471 12:50:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.471 12:50:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.471 ************************************ 00:06:07.471 START TEST accel_negative_buffers 00:06:07.471 ************************************ 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.471 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:07.471 12:50:19 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:07.728 -x option must be non-negative. 
00:06:07.728 [2024-07-15 12:50:19.937719] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:07.728 accel_perf options: 00:06:07.728 [-h help message] 00:06:07.728 [-q queue depth per core] 00:06:07.728 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:07.728 [-T number of threads per core 00:06:07.728 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:07.728 [-t time in seconds] 00:06:07.728 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:07.728 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:07.728 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:07.728 [-l for compress/decompress workloads, name of uncompressed input file 00:06:07.728 [-S for crc32c workload, use this seed value (default 0) 00:06:07.728 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:07.728 [-f for fill workload, use this BYTE value (default 255) 00:06:07.729 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:07.729 [-y verify result if this switch is on] 00:06:07.729 [-a tasks to allocate per core (default: same value as -q)] 00:06:07.729 Can be used to spread operations across a wider range of memory. 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.729 ************************************ 00:06:07.729 END TEST accel_negative_buffers 00:06:07.729 ************************************ 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.729 00:06:07.729 real 0m0.031s 00:06:07.729 user 0m0.018s 00:06:07.729 sys 0m0.013s 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.729 12:50:19 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:07.729 12:50:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.729 12:50:19 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:07.729 12:50:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:07.729 12:50:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.729 12:50:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.729 ************************************ 00:06:07.729 START TEST accel_crc32c 00:06:07.729 ************************************ 00:06:07.729 12:50:19 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:07.729 12:50:19 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:07.729 [2024-07-15 12:50:20.006931] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:07.729 [2024-07-15 12:50:20.007072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63877 ] 00:06:07.729 [2024-07-15 12:50:20.145422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.986 [2024-07-15 12:50:20.204823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.986 12:50:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.915 12:50:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.915 00:06:08.915 real 0m1.375s 00:06:08.915 user 0m1.203s 00:06:08.915 sys 0m0.069s 00:06:08.915 12:50:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.915 12:50:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:08.915 ************************************ 00:06:08.915 END TEST accel_crc32c 00:06:08.915 ************************************ 00:06:09.173 12:50:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.173 12:50:21 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:09.173 12:50:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:09.173 12:50:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.173 12:50:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.173 ************************************ 00:06:09.173 START TEST accel_crc32c_C2 00:06:09.173 ************************************ 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:09.173 12:50:21 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.173 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:09.173 [2024-07-15 12:50:21.430307] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:09.173 [2024-07-15 12:50:21.430484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63918 ] 00:06:09.173 [2024-07-15 12:50:21.574863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.173 [2024-07-15 12:50:21.634200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.430 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.431 12:50:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.375 00:06:10.375 real 0m1.390s 00:06:10.375 user 0m1.199s 00:06:10.375 sys 0m0.094s 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.375 12:50:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:10.375 ************************************ 00:06:10.375 END TEST accel_crc32c_C2 00:06:10.375 ************************************ 00:06:10.375 12:50:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.375 12:50:22 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:10.375 12:50:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.375 12:50:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.375 12:50:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.375 ************************************ 00:06:10.375 START TEST accel_copy 00:06:10.375 ************************************ 00:06:10.375 12:50:22 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.375 12:50:22 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:10.375 12:50:22 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:10.632 [2024-07-15 12:50:22.858853] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:10.632 [2024-07-15 12:50:22.858997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:06:10.632 [2024-07-15 12:50:22.998348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.633 [2024-07-15 12:50:23.071384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.914 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 
12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.915 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.916 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.916 12:50:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:10.916 12:50:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.916 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.916 12:50:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:12.295 12:50:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.295 00:06:12.295 real 0m1.516s 00:06:12.295 user 0m1.324s 00:06:12.295 sys 0m0.087s 00:06:12.295 12:50:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.295 12:50:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.295 ************************************ 00:06:12.295 END TEST accel_copy 00:06:12.295 ************************************ 00:06:12.295 12:50:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.295 12:50:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.295 12:50:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:12.295 12:50:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.295 12:50:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.295 ************************************ 00:06:12.295 START TEST accel_fill 00:06:12.295 ************************************ 00:06:12.295 12:50:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.295 12:50:24 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:12.295 [2024-07-15 12:50:24.425513] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:12.295 [2024-07-15 12:50:24.425673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63982 ] 00:06:12.295 [2024-07-15 12:50:24.572328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.295 [2024-07-15 12:50:24.636155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:12.295 12:50:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
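The fill pass traced above runs on the software module with a 4096-byte buffer and a fill value of 0x80 (decimal 128, matching the -f 128 argument on the accel_perf command line for this test). As a standalone illustration of what a software fill plus result verification amounts to, here is a minimal Python sketch; the names in it are mine and it is not SPDK's accel code, only the buffer size and fill byte are taken from the trace.

```python
# Minimal sketch of a software "fill" followed by verification.
# Only BUF_SIZE and FILL_BYTE mirror the traced run; the rest is illustrative.
BUF_SIZE = 4096      # '4096 bytes' in the trace above
FILL_BYTE = 0x80     # val=0x80, i.e. the -f 128 argument

buf = bytearray(BUF_SIZE)                  # destination buffer, initially zeroed
buf[:] = bytes([FILL_BYTE]) * BUF_SIZE     # the fill operation itself

# Verify every byte carries the pattern; the sketch's stand-in for the harness's
# verify option (-y on the accel_perf command line, as I read it).
assert all(b == FILL_BYTE for b in buf), "fill verification failed"
print(f"filled {BUF_SIZE} bytes with 0x{FILL_BYTE:02x}")
```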
00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.688 12:50:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:13.689 12:50:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.689 00:06:13.689 real 0m1.390s 00:06:13.689 user 0m1.211s 00:06:13.689 sys 0m0.081s 00:06:13.689 12:50:25 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.689 ************************************ 00:06:13.689 END TEST accel_fill 00:06:13.689 ************************************ 00:06:13.689 12:50:25 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:13.689 12:50:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.689 12:50:25 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:13.689 12:50:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.689 12:50:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.689 12:50:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.689 ************************************ 00:06:13.689 START TEST accel_copy_crc32c 00:06:13.689 ************************************ 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:13.689 12:50:25 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:13.689 [2024-07-15 12:50:25.857062] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:13.689 [2024-07-15 12:50:25.857193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64016 ] 00:06:13.689 [2024-07-15 12:50:25.994424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.689 [2024-07-15 12:50:26.081021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.689 12:50:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
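The copy_crc32c run traced here combines a 4096-byte buffer copy with a CRC-32C over the copied data, seeded with 0, on the software module. For orientation, the sketch below shows that combined operation in plain Python using a bitwise CRC-32C (Castagnoli polynomial). The function names are mine, and this only illustrates the arithmetic, not the code path accel_perf actually exercises.

```python
import os

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def copy_crc32c(src: bytes):
    """Copy the source and return (copy, CRC-32C of the copied bytes), seed 0."""
    dst = bytes(src)             # the copy half of the operation
    return dst, crc32c(dst)      # the checksum half

src = os.urandom(4096)                        # same buffer size as the traced run
dst, crc = copy_crc32c(src)
assert dst == src                             # the copy is byte-identical
assert crc32c(b"123456789") == 0xE3069283     # standard CRC-32C check value
print(f"copied {len(dst)} bytes, crc32c=0x{crc:08x}")
```

The 0xE3069283 assertion is the conventional CRC-32C check value for the ASCII string 123456789; it is a quick way to confirm the polynomial and bit ordering are correct.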
00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.081 00:06:15.081 real 0m1.405s 00:06:15.081 user 0m1.224s 00:06:15.081 sys 0m0.083s 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.081 12:50:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 ************************************ 00:06:15.081 END TEST accel_copy_crc32c 00:06:15.081 ************************************ 00:06:15.081 12:50:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.081 12:50:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.081 12:50:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:15.081 12:50:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.081 12:50:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 ************************************ 00:06:15.081 START TEST accel_copy_crc32c_C2 00:06:15.081 ************************************ 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.081 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:15.081 [2024-07-15 12:50:27.299064] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:15.081 [2024-07-15 12:50:27.299157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64051 ] 00:06:15.081 [2024-07-15 12:50:27.429303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.081 [2024-07-15 12:50:27.518705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.340 12:50:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.274 00:06:16.274 real 0m1.401s 00:06:16.274 user 0m1.218s 00:06:16.274 sys 0m0.084s 00:06:16.274 ************************************ 00:06:16.274 END TEST accel_copy_crc32c_C2 
00:06:16.274 ************************************ 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.274 12:50:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 12:50:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.274 12:50:28 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:16.274 12:50:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.274 12:50:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.274 12:50:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 ************************************ 00:06:16.274 START TEST accel_dualcast 00:06:16.274 ************************************ 00:06:16.274 12:50:28 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:16.274 12:50:28 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:16.532 [2024-07-15 12:50:28.741257] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
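The harness now starts accel_dualcast. A dualcast operation, as the name is commonly used in this framework, writes a single source buffer to two destination buffers in one operation; assuming that semantic, here is a purely illustrative Python sketch (the function name and verification are mine, only the 4096-byte size comes from the surrounding traces):

```python
import os

def dualcast(src: bytes):
    """Illustrative dualcast: broadcast one source buffer into two destinations."""
    dst1 = bytes(src)    # first destination copy
    dst2 = bytes(src)    # second destination copy
    return dst1, dst2

src = os.urandom(4096)                  # 4096-byte buffer, as in the traces above
dst1, dst2 = dualcast(src)
assert dst1 == src and dst2 == src      # both destinations must match the source
print("dualcast verified for", len(src), "bytes")
```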
00:06:16.532 [2024-07-15 12:50:28.741393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64087 ] 00:06:16.532 [2024-07-15 12:50:28.878857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.532 [2024-07-15 12:50:28.953690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.532 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:16.789 12:50:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:16.789 12:50:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:16.789 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:16.789 12:50:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 ************************************ 00:06:17.726 END TEST accel_dualcast 00:06:17.726 ************************************ 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:17.726 12:50:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.726 00:06:17.726 real 0m1.391s 00:06:17.726 user 0m1.216s 00:06:17.726 sys 0m0.077s 00:06:17.726 12:50:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.726 12:50:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:17.726 12:50:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.726 12:50:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:17.726 12:50:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.726 12:50:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.726 12:50:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.726 ************************************ 00:06:17.726 START TEST accel_compare 00:06:17.726 ************************************ 00:06:17.726 12:50:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:17.726 12:50:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:17.726 [2024-07-15 12:50:30.179557] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
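Note: each sub-test drives the standalone accel_perf example with the flags recorded just above (here "-t 1 -w compare -y"). A hedged sketch of re-running one workload by hand is below; the "-c /dev/fd/62" JSON config that the harness pipes in is dropped, on the assumption that accel_perf then falls back to its built-in software module, and the repo path is the one used on this build VM.
    # Manual re-run of the compare workload, using the binary path and flags seen in this log.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk          # adjust for a local checkout
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w compare -y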
00:06:17.726 [2024-07-15 12:50:30.179699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:06:17.984 [2024-07-15 12:50:30.314759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.984 [2024-07-15 12:50:30.379096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.984 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:17.985 12:50:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.357 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.357 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.357 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 ************************************ 00:06:19.358 END TEST accel_compare 00:06:19.358 ************************************ 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:19.358 12:50:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.358 00:06:19.358 real 0m1.394s 00:06:19.358 user 0m1.214s 00:06:19.358 sys 0m0.077s 00:06:19.358 12:50:31 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.358 12:50:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 12:50:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.358 12:50:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:19.358 12:50:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.358 12:50:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.358 12:50:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 ************************************ 00:06:19.358 START TEST accel_xor 00:06:19.358 ************************************ 00:06:19.358 12:50:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:19.358 12:50:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:19.358 [2024-07-15 12:50:31.611367] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
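Note: every sub-test in this log is bracketed by START TEST / END TEST banners and a real/user/sys summary. The snippet below is an illustrative-only reconstruction of that observable pattern, not SPDK's actual run_test implementation; the function and test names are placeholders.
    # Sketch of the banner-plus-timing wrapper behaviour seen around each sub-test.
    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                                # prints real/user/sys like the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }
    run_test_sketch demo_sleep sleep 1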
00:06:19.358 [2024-07-15 12:50:31.611486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64151 ] 00:06:19.358 [2024-07-15 12:50:31.765388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.616 [2024-07-15 12:50:31.852834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.616 12:50:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:20.573 12:50:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.573 00:06:20.573 real 0m1.448s 00:06:20.573 user 0m1.263s 00:06:20.573 sys 0m0.084s 00:06:20.573 12:50:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.573 12:50:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:20.573 ************************************ 00:06:20.573 END TEST accel_xor 00:06:20.573 ************************************ 00:06:20.831 12:50:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.831 12:50:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:20.831 12:50:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:20.831 12:50:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.831 12:50:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 ************************************ 00:06:20.831 START TEST accel_xor 00:06:20.831 ************************************ 00:06:20.831 12:50:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:20.831 12:50:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:20.831 [2024-07-15 12:50:33.100542] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
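Note: this second xor run differs from the previous one only by the extra "-x 3" in the wrapper and accel_perf command lines above, which appears to raise the number of xor source buffers from the default of 2 (the earlier trace shows val=2) to 3; that reading of "-x" is an assumption based on the recorded flags.
    # The two xor invocations recorded in this log, side by side.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y          # default sources
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3     # three-source run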
00:06:20.831 [2024-07-15 12:50:33.100668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64185 ] 00:06:20.831 [2024-07-15 12:50:33.239646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.089 [2024-07-15 12:50:33.312081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.089 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.090 12:50:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 12:50:34 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.020 ************************************ 00:06:22.020 END TEST accel_xor 00:06:22.020 ************************************ 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:22.020 12:50:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.020 00:06:22.020 real 0m1.396s 00:06:22.020 user 0m1.216s 00:06:22.020 sys 0m0.079s 00:06:22.020 12:50:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.020 12:50:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:22.286 12:50:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.286 12:50:34 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:22.286 12:50:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.286 12:50:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.286 12:50:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.286 ************************************ 00:06:22.286 START TEST accel_dif_verify 00:06:22.286 ************************************ 00:06:22.286 12:50:34 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:22.286 12:50:34 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:22.286 [2024-07-15 12:50:34.530837] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
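Note: the dif_verify trace that follows configures two '4096 bytes' values plus '512 bytes' and '8 bytes'. One plausible interpretation (not confirmed by the log itself) is a 4096-byte payload split into 512-byte blocks, each carrying an 8-byte DIF; the arithmetic below is just that interpretation worked out.
    # Quick arithmetic on the sizes seen in the trace, under the layout assumption above.
    echo $(( 4096 / 512 )) blocks              # 8 protected blocks per buffer
    echo $(( (4096 / 512) * 8 )) "DIF bytes"   # 64 bytes of DIF metadata total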
00:06:22.286 [2024-07-15 12:50:34.530941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64221 ] 00:06:22.286 [2024-07-15 12:50:34.674497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.286 [2024-07-15 12:50:34.739625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:22.543 12:50:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:23.474 ************************************ 00:06:23.474 END TEST accel_dif_verify 00:06:23.474 ************************************ 00:06:23.474 12:50:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.474 00:06:23.474 real 0m1.391s 00:06:23.474 user 0m1.212s 00:06:23.474 sys 0m0.079s 00:06:23.474 12:50:35 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.474 12:50:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:23.474 12:50:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.474 12:50:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:23.474 12:50:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.474 12:50:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.474 12:50:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.731 ************************************ 00:06:23.731 START TEST accel_dif_generate 00:06:23.731 ************************************ 00:06:23.731 12:50:35 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.731 12:50:35 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:23.731 12:50:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:23.731 [2024-07-15 12:50:35.965409] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:23.731 [2024-07-15 12:50:35.965539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64250 ] 00:06:23.731 [2024-07-15 12:50:36.105206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.731 [2024-07-15 12:50:36.179000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.989 12:50:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.990 12:50:36 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:23.990 12:50:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:24.922 12:50:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.922 00:06:24.922 real 0m1.402s 
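The xtrace above is test/accel/accel.sh working through the accel_dif_generate case: the @12 entry gives the exact command under test, and the long run of "val=" / case lines is the harness echoing every option it applies (4096-byte buffers, a 512-byte / 8-byte pair that looks like the DIF block and metadata sizes, two 32s, the software module, a 1-second run). The real/user/sys triple printed around this point looks like the shell's own timing of the whole sub-test rather than a statistic reported by accel_perf. A rough sketch of that option loop, reconstructed from the trace alone (the while wrapper and the case patterns are guesses; only the names var, val, accel_opc and accel_module actually appear in the log):

  while IFS=: read -r var val; do
      case "$var" in
          *opc*)    accel_opc=$val ;;     # e.g. dif_generate
          *module*) accel_module=$val ;;  # e.g. software
      esac
  done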
00:06:24.922 user 0m1.214s 00:06:24.922 sys 0m0.090s 00:06:24.922 ************************************ 00:06:24.922 END TEST accel_dif_generate 00:06:24.922 ************************************ 00:06:24.922 12:50:37 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.922 12:50:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:24.922 12:50:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.922 12:50:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:24.922 12:50:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:24.922 12:50:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.922 12:50:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.922 ************************************ 00:06:24.922 START TEST accel_dif_generate_copy 00:06:24.922 ************************************ 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:24.922 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.180 [2024-07-15 12:50:37.403482] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
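This banner and the DPDK EAL parameter line that follows it are printed by accel_perf itself at start-up for the dif_generate_copy case; the -c 0x1 core mask in those parameters matches the "Total cores available: 1" notice and the single reactor on core 0, so each of these single-threaded cases runs on one core. The command being timed is the one on the @12 line above; to repeat just this workload against the same build (the -c /dev/fd/62 JSON config is supplied by the harness and can presumably be dropped when no extra accel module needs configuring):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy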
00:06:25.180 [2024-07-15 12:50:37.403608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64284 ] 00:06:25.180 [2024-07-15 12:50:37.542488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.180 [2024-07-15 12:50:37.602848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.180 12:50:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
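Every case in this file closes the same way: the trailing [[ -n software ]] / [[ -n dif_generate_copy ]] checks confirm which module and opcode were exercised, and a real/user/sys line (printed just below) records the wall-clock cost of the sub-test. To pull those per-case timings out of a console log like this one in bulk, a one-liner along these lines works (the console.log file name is an assumption, use whatever the log was saved as):

  grep -o 'real [0-9]*m[0-9.]*s' console.log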
00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.555 00:06:26.555 real 0m1.377s 00:06:26.555 user 0m0.012s 00:06:26.555 sys 0m0.003s 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.555 ************************************ 00:06:26.555 END TEST accel_dif_generate_copy 00:06:26.555 ************************************ 00:06:26.555 12:50:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.555 12:50:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.555 12:50:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:26.555 12:50:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.555 12:50:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.555 12:50:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.555 12:50:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.555 ************************************ 00:06:26.555 START TEST accel_comp 00:06:26.555 ************************************ 00:06:26.555 12:50:38 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:26.555 12:50:38 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:26.555 12:50:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:26.555 [2024-07-15 12:50:38.810698] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:26.555 [2024-07-15 12:50:38.810841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64319 ] 00:06:26.555 [2024-07-15 12:50:38.948200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.555 [2024-07-15 12:50:39.015933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:26.813 12:50:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:27.747 12:50:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.747 00:06:27.747 real 0m1.378s 00:06:27.747 user 0m1.205s 00:06:27.747 sys 0m0.078s 00:06:27.747 12:50:40 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.747 12:50:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 ************************************ 00:06:27.747 END TEST accel_comp 00:06:27.747 ************************************ 00:06:27.747 12:50:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.747 12:50:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.747 12:50:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.747 12:50:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.747 12:50:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 ************************************ 00:06:27.747 START TEST accel_decomp 00:06:27.747 ************************************ 00:06:27.747 12:50:40 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:27.747 12:50:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:28.006 [2024-07-15 12:50:40.227826] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:28.006 [2024-07-15 12:50:40.227913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64348 ] 00:06:28.006 [2024-07-15 12:50:40.360087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.006 [2024-07-15 12:50:40.420123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
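The compress and decompress cases both point -l at the test/accel/bib sample file, but the decompress family additionally passes -y (presumably asking accel_perf to verify the inflated data), and the option echo switches from val=No to the val=Yes seen here. The exact invocation, copied from the @12 line above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y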
00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:28.007 12:50:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.381 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.382 12:50:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.382 00:06:29.382 real 0m1.368s 00:06:29.382 user 0m1.202s 00:06:29.382 sys 0m0.072s 00:06:29.382 12:50:41 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.382 ************************************ 00:06:29.382 12:50:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:29.382 END TEST accel_decomp 00:06:29.382 ************************************ 00:06:29.382 12:50:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.382 12:50:41 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.382 12:50:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:29.382 12:50:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.382 12:50:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.382 ************************************ 00:06:29.382 START TEST accel_decomp_full 00:06:29.382 ************************************ 00:06:29.382 12:50:41 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:29.382 12:50:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:29.382 [2024-07-15 12:50:41.641598] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
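accel_decomp_full re-runs the decompress workload with -o 0 appended; judging by the option echo below, that changes the data size from the usual '4096 bytes' to '111250 bytes', i.e. the whole bib file appears to be handled as a single operation rather than in 4 KiB chunks (an inference from the trace, not something the log states explicitly). Verbatim command from the @12 line:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0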
00:06:29.382 [2024-07-15 12:50:41.641703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64388 ] 00:06:29.382 [2024-07-15 12:50:41.778920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.640 [2024-07-15 12:50:41.848387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.640 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:29.641 12:50:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.574 12:50:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.574 00:06:30.574 real 0m1.397s 00:06:30.574 user 0m1.223s 00:06:30.574 sys 0m0.080s 00:06:30.574 12:50:43 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.574 ************************************ 00:06:30.574 END TEST accel_decomp_full 00:06:30.574 ************************************ 00:06:30.574 12:50:43 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 12:50:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.832 12:50:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.832 12:50:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:30.832 12:50:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.832 12:50:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 ************************************ 00:06:30.832 START TEST accel_decomp_mcore 00:06:30.832 ************************************ 00:06:30.832 12:50:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.832 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:30.832 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:30.832 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:30.833 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:30.833 [2024-07-15 12:50:43.079330] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:30.833 [2024-07-15 12:50:43.079453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64417 ] 00:06:30.833 [2024-07-15 12:50:43.221250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.833 [2024-07-15 12:50:43.295356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.833 [2024-07-15 12:50:43.295438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.833 [2024-07-15 12:50:43.295508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.833 [2024-07-15 12:50:43.295510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
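accel_decomp_mcore is the first multi-core case: the harness forwards -m 0xf to accel_perf, the EAL parameters above carry -c 0xf, and four reactors (cores 0 through 3) come up instead of one, so the same decompress workload is spread across four reactor cores. Verbatim invocation from the @12 line:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf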
00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 
12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 12:50:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.024 00:06:32.024 real 0m1.432s 00:06:32.024 user 0m0.015s 00:06:32.024 sys 0m0.002s 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.024 12:50:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:32.024 ************************************ 00:06:32.024 END TEST accel_decomp_mcore 00:06:32.024 ************************************ 00:06:32.282 12:50:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.282 12:50:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.282 12:50:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.282 12:50:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.282 12:50:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.282 ************************************ 00:06:32.282 START TEST accel_decomp_full_mcore 00:06:32.282 ************************************ 00:06:32.282 12:50:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.282 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:06:32.283 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:32.283 [2024-07-15 12:50:44.554720] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:32.283 [2024-07-15 12:50:44.554857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64460 ] 00:06:32.283 [2024-07-15 12:50:44.697015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.541 [2024-07-15 12:50:44.758906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.541 [2024-07-15 12:50:44.758985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.541 [2024-07-15 12:50:44.759074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.541 [2024-07-15 12:50:44.759077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:32.541 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.542 12:50:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.509 00:06:33.509 real 0m1.406s 00:06:33.509 user 0m4.476s 00:06:33.509 sys 0m0.092s 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.509 ************************************ 00:06:33.509 END TEST accel_decomp_full_mcore 00:06:33.509 12:50:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:33.509 ************************************ 00:06:33.793 12:50:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.793 12:50:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.793 12:50:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:33.793 12:50:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.793 12:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.793 ************************************ 00:06:33.794 START TEST accel_decomp_mthread 00:06:33.794 ************************************ 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:33.794 12:50:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:33.794 [2024-07-15 12:50:45.995867] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:06:33.794 [2024-07-15 12:50:45.995984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64492 ] 00:06:33.794 [2024-07-15 12:50:46.141934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.794 [2024-07-15 12:50:46.227318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.053 12:50:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.987 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.988 00:06:34.988 real 0m1.412s 00:06:34.988 user 0m1.239s 00:06:34.988 sys 0m0.078s 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.988 12:50:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:34.988 ************************************ 00:06:34.988 END TEST accel_decomp_mthread 00:06:34.988 ************************************ 00:06:34.988 12:50:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.988 12:50:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.988 12:50:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.988 12:50:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.988 12:50:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.988 ************************************ 00:06:34.988 START 
TEST accel_decomp_full_mthread 00:06:34.988 ************************************ 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:34.988 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:34.988 [2024-07-15 12:50:47.449248] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:06:34.988 [2024-07-15 12:50:47.449373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64521 ] 00:06:35.247 [2024-07-15 12:50:47.606142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.247 [2024-07-15 12:50:47.693193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:35.505 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:35.506 12:50:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.506 12:50:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.439 00:06:36.439 real 0m1.449s 00:06:36.439 user 0m1.268s 00:06:36.439 sys 0m0.085s 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.439 ************************************ 00:06:36.439 END TEST accel_decomp_full_mthread 00:06:36.439 ************************************ 00:06:36.439 12:50:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
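For reference, the four decompression variants traced above reduce to the accel_perf invocations sketched below. This is an illustrative summary only: the flag meanings are inferred from the traced values (-m 0xf matched "Total cores available: 4", -T 2 matched the thread count of 2, and -o 0 switched the chunk size from '4096 bytes' to the full '111250 bytes'), and PERF/BIB are shorthand variables introduced here rather than names from the test scripts. In the actual run, -c /dev/fd/62 carried the JSON accel config assembled by build_accel_config (no module overrides were configured this time).
  PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # accel_decomp_mcore: 1-second software decompress of 4096-byte chunks across cores 0-3
  $PERF -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -m 0xf
  # accel_decomp_full_mcore: same core mask, whole-file (111250-byte) buffers
  $PERF -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -o 0 -m 0xf
  # accel_decomp_mthread: single core, two worker threads
  $PERF -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -T 2
  # accel_decomp_full_mthread: whole-file buffers with two worker threads
  $PERF -c /dev/fd/62 -t 1 -w decompress -l $BIB -y -o 0 -T 2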
00:06:36.697 12:50:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.697 12:50:48 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:36.697 12:50:48 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:36.697 12:50:48 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:36.697 12:50:48 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.697 12:50:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.697 12:50:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.697 12:50:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.697 12:50:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.697 12:50:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.697 12:50:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.697 12:50:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.697 12:50:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:36.697 12:50:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:36.697 ************************************ 00:06:36.697 START TEST accel_dif_functional_tests 00:06:36.697 ************************************ 00:06:36.697 12:50:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:36.697 [2024-07-15 12:50:48.967322] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:36.697 [2024-07-15 12:50:48.967410] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64562 ] 00:06:36.697 [2024-07-15 12:50:49.099136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.698 [2024-07-15 12:50:49.161084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.698 [2024-07-15 12:50:49.161156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.698 [2024-07-15 12:50:49.161151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.955 00:06:36.955 00:06:36.955 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.955 http://cunit.sourceforge.net/ 00:06:36.955 00:06:36.955 00:06:36.955 Suite: accel_dif 00:06:36.955 Test: verify: DIF generated, GUARD check ...passed 00:06:36.955 Test: verify: DIF generated, APPTAG check ...passed 00:06:36.955 Test: verify: DIF generated, REFTAG check ...passed 00:06:36.955 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:50:49.212301] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.955 passed 00:06:36.955 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:50:49.212452] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.955 passed 00:06:36.955 Test: verify: DIF not generated, REFTAG check ...passed 00:06:36.955 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:36.955 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:50:49.212514] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.955 passed 00:06:36.955 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:36.955 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:36.955 Test: verify: REFTAG_INIT 
correct, REFTAG check ...[2024-07-15 12:50:49.212604] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:36.955 passed 00:06:36.955 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:36.955 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 12:50:49.212807] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:36.955 passed 00:06:36.955 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:36.955 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:36.955 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:36.955 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:50:49.213066] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.955 [2024-07-15 12:50:49.213138] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.955 passed 00:06:36.955 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:36.955 Test: generate copy: DIF generated, GUARD check ...passed 00:06:36.956 Test: generate copy: DIF generated, APTTAG check ...[2024-07-15 12:50:49.213187] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.956 passed 00:06:36.956 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:36.956 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:36.956 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:36.956 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:36.956 Test: generate copy: iovecs-len validate ...[2024-07-15 12:50:49.213501] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:36.956 passed 00:06:36.956 Test: generate copy: buffer alignment validate ...passed 00:06:36.956 00:06:36.956 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.956 suites 1 1 n/a 0 0 00:06:36.956 tests 26 26 26 0 0 00:06:36.956 asserts 115 115 115 0 n/a 00:06:36.956 00:06:36.956 Elapsed time = 0.003 seconds 00:06:36.956 00:06:36.956 real 0m0.457s 00:06:36.956 user 0m0.538s 00:06:36.956 sys 0m0.106s 00:06:36.956 12:50:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.956 12:50:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 ************************************ 00:06:36.956 END TEST accel_dif_functional_tests 00:06:36.956 ************************************ 00:06:36.956 12:50:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.956 ************************************ 00:06:36.956 END TEST accel 00:06:36.956 ************************************ 00:06:36.956 00:06:36.956 real 0m31.382s 00:06:36.956 user 0m33.531s 00:06:36.956 sys 0m2.862s 00:06:36.956 12:50:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.956 12:50:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.214 12:50:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.214 12:50:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:37.214 12:50:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.214 12:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.214 12:50:49 -- common/autotest_common.sh@10 -- # set +x 00:06:37.214 ************************************ 00:06:37.214 START TEST accel_rpc 00:06:37.214 ************************************ 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:37.214 * Looking for test storage... 00:06:37.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:37.214 12:50:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.214 12:50:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64621 00:06:37.214 12:50:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64621 00:06:37.214 12:50:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64621 ']' 00:06:37.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.214 12:50:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.214 [2024-07-15 12:50:49.574884] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:06:37.214 [2024-07-15 12:50:49.574995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64621 ] 00:06:37.473 [2024-07-15 12:50:49.705514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.473 [2024-07-15 12:50:49.794304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 ************************************ 00:06:38.407 START TEST accel_assign_opcode 00:06:38.407 ************************************ 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 [2024-07-15 12:50:50.555003] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 [2024-07-15 12:50:50.562999] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.407 software 00:06:38.407 00:06:38.407 real 0m0.217s 00:06:38.407 user 0m0.055s 00:06:38.407 sys 0m0.012s 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.407 ************************************ 00:06:38.407 END TEST accel_assign_opcode 00:06:38.407 ************************************ 00:06:38.407 12:50:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:38.407 12:50:50 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64621 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64621 ']' 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64621 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64621 00:06:38.407 killing process with pid 64621 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64621' 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@967 -- # kill 64621 00:06:38.407 12:50:50 accel_rpc -- common/autotest_common.sh@972 -- # wait 64621 00:06:38.665 00:06:38.665 real 0m1.663s 00:06:38.666 user 0m1.860s 00:06:38.666 sys 0m0.325s 00:06:38.666 12:50:51 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.666 ************************************ 00:06:38.666 END TEST accel_rpc 00:06:38.666 ************************************ 00:06:38.666 12:50:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.923 12:50:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.923 12:50:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.923 12:50:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.923 12:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.923 12:50:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.923 ************************************ 00:06:38.923 START TEST app_cmdline 00:06:38.923 ************************************ 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.923 * Looking for test storage... 00:06:38.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:38.923 12:50:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
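The accel_assign_opcode sequence recorded above boils down to a handful of RPCs; a minimal sketch, assuming the default /var/tmp/spdk.sock socket (backgrounding details and the final kill of pid 64621 are elided):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # before framework init: assignment to a deliberately bogus module name is still accepted
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect
  # reassign the copy opcode to the software module, then finish initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # the test expects the copy opcode to report "software"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy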
00:06:38.923 12:50:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64733 00:06:38.923 12:50:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.923 12:50:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64733 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64733 ']' 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.923 12:50:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.923 [2024-07-15 12:50:51.303047] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:38.923 [2024-07-15 12:50:51.303172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64733 ] 00:06:39.183 [2024-07-15 12:50:51.450938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.183 [2024-07-15 12:50:51.540336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.114 12:50:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.114 12:50:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:40.114 12:50:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:40.114 { 00:06:40.114 "fields": { 00:06:40.115 "commit": "a62e924c8", 00:06:40.115 "major": 24, 00:06:40.115 "minor": 9, 00:06:40.115 "patch": 0, 00:06:40.115 "suffix": "-pre" 00:06:40.115 }, 00:06:40.115 "version": "SPDK v24.09-pre git sha1 a62e924c8" 00:06:40.115 } 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.372 12:50:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:40.372 12:50:52 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.629 2024/07/15 12:50:52 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:40.629 request: 00:06:40.629 { 00:06:40.630 "method": "env_dpdk_get_mem_stats", 00:06:40.630 "params": {} 00:06:40.630 } 00:06:40.630 Got JSON-RPC error response 00:06:40.630 GoRPCClient: error on JSON-RPC call 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.630 12:50:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64733 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64733 ']' 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64733 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64733 00:06:40.630 killing process with pid 64733 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64733' 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 64733 00:06:40.630 12:50:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 64733 00:06:40.888 00:06:40.888 real 0m2.077s 00:06:40.888 user 0m2.782s 00:06:40.888 sys 0m0.400s 00:06:40.888 12:50:53 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.888 ************************************ 00:06:40.888 END TEST app_cmdline 00:06:40.888 ************************************ 00:06:40.888 12:50:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.888 12:50:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.888 12:50:53 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.888 12:50:53 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:06:40.888 12:50:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.888 12:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:40.888 ************************************ 00:06:40.888 START TEST version 00:06:40.888 ************************************ 00:06:40.888 12:50:53 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.888 * Looking for test storage... 00:06:40.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.888 12:50:53 version -- app/version.sh@17 -- # get_header_version major 00:06:40.888 12:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # cut -f2 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.888 12:50:53 version -- app/version.sh@17 -- # major=24 00:06:40.888 12:50:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:40.888 12:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # cut -f2 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.888 12:50:53 version -- app/version.sh@18 -- # minor=9 00:06:40.888 12:50:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:40.888 12:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # cut -f2 00:06:40.888 12:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:40.888 12:50:53 version -- app/version.sh@19 -- # patch=0 00:06:41.160 12:50:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.160 12:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.160 12:50:53 version -- app/version.sh@14 -- # cut -f2 00:06:41.160 12:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.160 12:50:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.160 12:50:53 version -- app/version.sh@22 -- # version=24.9 00:06:41.160 12:50:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.160 12:50:53 version -- app/version.sh@28 -- # version=24.9rc0 00:06:41.161 12:50:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.161 12:50:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.161 12:50:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:41.161 12:50:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:41.161 00:06:41.161 real 0m0.128s 00:06:41.161 user 0m0.081s 00:06:41.161 sys 0m0.074s 00:06:41.161 12:50:53 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.161 12:50:53 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 ************************************ 00:06:41.161 END TEST version 00:06:41.161 ************************************ 00:06:41.161 12:50:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.161 12:50:53 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:41.161 
12:50:53 -- spdk/autotest.sh@198 -- # uname -s 00:06:41.161 12:50:53 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:41.161 12:50:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.161 12:50:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.161 12:50:53 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:41.161 12:50:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.161 12:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 12:50:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:41.161 12:50:53 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:41.161 12:50:53 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.161 12:50:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.161 12:50:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.161 12:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 ************************************ 00:06:41.161 START TEST nvmf_tcp 00:06:41.161 ************************************ 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.161 * Looking for test storage... 00:06:41.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.161 12:50:53 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.161 12:50:53 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.161 12:50:53 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.161 12:50:53 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.161 12:50:53 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.161 12:50:53 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.161 12:50:53 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:41.161 12:50:53 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:41.161 12:50:53 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.161 12:50:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.161 ************************************ 00:06:41.161 START TEST nvmf_example 00:06:41.161 ************************************ 00:06:41.161 12:50:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:41.420 * Looking for test storage... 
00:06:41.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.420 12:50:53 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.421 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # '[' '' -eq 1 ']' 00:06:41.421 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh: line 11: [: : integer expression expected 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@16 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@18 -- # MALLOC_BDEV_SIZE=64 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@19 -- # MALLOC_BLOCK_SIZE=512 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # build_nvmf_example_args 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@22 -- # '[' 0 -eq 1 ']' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@25 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@26 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # timing_enter nvmf_example_test 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- 
# set +x 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@46 -- # nvmftestinit 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@452 -- # prepare_net_devs 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # local -g is_hw=no 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # remove_spdk_ns 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@436 -- # nvmf_veth_init 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:06:41.421 Cannot find device "nvmf_init_br" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:06:41.421 Cannot find device "nvmf_tgt_br" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:06:41.421 Cannot find device "nvmf_tgt_br2" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:06:41.421 Cannot find device "nvmf_init_br" 00:06:41.421 12:50:53 
nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:06:41.421 Cannot find device "nvmf_tgt_br" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:06:41.421 Cannot find device "nvmf_tgt_br2" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:06:41.421 Cannot find device "nvmf_br" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@164 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:06:41.421 Cannot find device "nvmf_init_if" 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@165 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:41.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:41.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@167 -- # true 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:06:41.421 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:41.422 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:41.679 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:41.679 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:06:41.679 12:50:53 
nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:06:41.679 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:06:41.679 12:50:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:06:41.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:06:41.679 00:06:41.679 --- 10.0.0.2 ping statistics --- 00:06:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.679 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:06:41.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:41.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:06:41.679 00:06:41.679 --- 10.0.0.3 ping statistics --- 00:06:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.679 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:41.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:41.679 00:06:41.679 --- 10.0.0.1 ping statistics --- 00:06:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.679 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@437 -- # return 0 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.679 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # nvmfexamplestart '-m 0xF' 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@32 -- # timing_enter start_nvmf_example 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # '[' tcp == tcp ']' 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@39 -- # nvmfpid=65099 00:06:41.680 12:50:54 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@38 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # waitforlisten 65099 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65099 ']' 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.680 12:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # timing_exit start_nvmf_example 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # rpc_cmd bdev_malloc_create 64 512 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # malloc_bdevs='Malloc0 ' 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # for malloc_bdev in $malloc_bdevs 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@58 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
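The nvmf_veth_init steps traced above build the virtual test topology before the example target starts inside the namespace. Condensed into a sketch (same interface names and 10.0.0.0/24 addressing as in the log; the second target interface and the individual link-up commands are summarized in comments), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # bring the links (and lo inside the namespace) up, as in the log, then
    # open the NVMe/TCP port toward the initiator-side interface:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check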
00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@62 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@64 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:43.051 12:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:55.246 Initializing NVMe Controllers 00:06:55.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:55.246 Initialization complete. Launching workers. 00:06:55.246 ======================================================== 00:06:55.246 Latency(us) 00:06:55.246 Device Information : IOPS MiB/s Average min max 00:06:55.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13553.68 52.94 4722.65 930.13 23162.82 00:06:55.246 ======================================================== 00:06:55.246 Total : 13553.68 52.94 4722.65 930.13 23162.82 00:06:55.246 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@70 -- # trap - SIGINT SIGTERM EXIT 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@71 -- # nvmftestfini 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # nvmfcleanup 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # sync 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.246 rmmod nvme_tcp 00:06:55.246 rmmod nvme_fabrics 00:06:55.246 rmmod nvme_keyring 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@493 -- # '[' -n 65099 ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@494 -- # killprocess 65099 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65099 ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65099 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65099 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 65099' 00:06:55.246 killing process with pid 65099 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65099 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65099 00:06:55.246 nvmf threads initialize successfully 00:06:55.246 bdev subsystem init successfully 00:06:55.246 created a nvmf target service 00:06:55.246 create targets's poll groups done 00:06:55.246 all subsystems of target started 00:06:55.246 nvmf target is running 00:06:55.246 all subsystems of target stopped 00:06:55.246 destroy targets's poll groups done 00:06:55.246 destroyed the nvmf target service 00:06:55.246 bdev subsystem finish successfully 00:06:55.246 nvmf threads destroy successfully 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@282 -- # remove_spdk_ns 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:06:55.246 12:51:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@72 -- # timing_exit nvmf_example_test 00:06:55.247 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.247 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.247 ************************************ 00:06:55.247 END TEST nvmf_example 00:06:55.247 ************************************ 00:06:55.247 00:06:55.247 real 0m12.383s 00:06:55.247 user 0m44.531s 00:06:55.247 sys 0m2.037s 00:06:55.247 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.247 12:51:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.247 12:51:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:55.247 12:51:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.247 12:51:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.247 12:51:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.247 12:51:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.247 ************************************ 00:06:55.247 START TEST nvmf_filesystem 00:06:55.247 ************************************ 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.247 * Looking for test storage... 
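The nvmf_example run above provisions the target over RPC and then drives it with spdk_nvme_perf from the initiator side. A condensed sketch of that flow, reusing the NQN, address, and perf arguments shown in the log (the namespace and RPC-socket plumbing the test wrapper adds around rpc.py is omitted here), looks like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                   # the log names this bdev Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'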
00:06:55.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:55.247 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:55.247 #define SPDK_CONFIG_H 00:06:55.247 #define SPDK_CONFIG_APPS 1 00:06:55.247 #define SPDK_CONFIG_ARCH native 00:06:55.247 #undef SPDK_CONFIG_ASAN 00:06:55.247 #define SPDK_CONFIG_AVAHI 1 00:06:55.248 #undef SPDK_CONFIG_CET 00:06:55.248 #define SPDK_CONFIG_COVERAGE 1 00:06:55.248 #define SPDK_CONFIG_CROSS_PREFIX 00:06:55.248 #undef SPDK_CONFIG_CRYPTO 00:06:55.248 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:55.248 #undef SPDK_CONFIG_CUSTOMOCF 00:06:55.248 #undef SPDK_CONFIG_DAOS 00:06:55.248 #define SPDK_CONFIG_DAOS_DIR 00:06:55.248 #define SPDK_CONFIG_DEBUG 1 00:06:55.248 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:55.248 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:55.248 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:55.248 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:55.248 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:55.248 #undef SPDK_CONFIG_DPDK_UADK 00:06:55.248 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:55.248 #define SPDK_CONFIG_EXAMPLES 1 00:06:55.248 #undef SPDK_CONFIG_FC 00:06:55.248 #define SPDK_CONFIG_FC_PATH 00:06:55.248 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:55.248 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:55.248 #undef SPDK_CONFIG_FUSE 00:06:55.248 #undef SPDK_CONFIG_FUZZER 00:06:55.248 #define SPDK_CONFIG_FUZZER_LIB 00:06:55.248 #define SPDK_CONFIG_GOLANG 1 00:06:55.248 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:55.248 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:55.248 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:55.248 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:55.248 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:55.248 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:55.248 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:55.248 #define SPDK_CONFIG_IDXD 1 00:06:55.248 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:55.248 #undef SPDK_CONFIG_IPSEC_MB 00:06:55.248 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:55.248 #define SPDK_CONFIG_ISAL 1 00:06:55.248 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:55.248 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:55.248 #define SPDK_CONFIG_LIBDIR 00:06:55.248 #undef SPDK_CONFIG_LTO 00:06:55.248 #define SPDK_CONFIG_MAX_LCORES 128 00:06:55.248 #define SPDK_CONFIG_NVME_CUSE 1 00:06:55.248 #undef SPDK_CONFIG_OCF 00:06:55.248 #define SPDK_CONFIG_OCF_PATH 00:06:55.248 #define SPDK_CONFIG_OPENSSL_PATH 00:06:55.248 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:55.248 #define SPDK_CONFIG_PGO_DIR 00:06:55.248 #undef SPDK_CONFIG_PGO_USE 00:06:55.248 #define SPDK_CONFIG_PREFIX /usr/local 00:06:55.248 #undef SPDK_CONFIG_RAID5F 00:06:55.248 #undef SPDK_CONFIG_RBD 00:06:55.248 #define SPDK_CONFIG_RDMA 1 00:06:55.248 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:55.248 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:55.248 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:55.248 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:55.248 #define SPDK_CONFIG_SHARED 1 00:06:55.248 #undef SPDK_CONFIG_SMA 00:06:55.248 #define SPDK_CONFIG_TESTS 1 00:06:55.248 #undef SPDK_CONFIG_TSAN 00:06:55.248 #define SPDK_CONFIG_UBLK 1 00:06:55.248 #define SPDK_CONFIG_UBSAN 1 00:06:55.248 #undef SPDK_CONFIG_UNIT_TESTS 00:06:55.248 #undef SPDK_CONFIG_URING 00:06:55.248 #define SPDK_CONFIG_URING_PATH 00:06:55.248 #undef SPDK_CONFIG_URING_ZNS 00:06:55.248 #define SPDK_CONFIG_USDT 1 00:06:55.248 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:55.248 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:55.248 #undef SPDK_CONFIG_VFIO_USER 00:06:55.248 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:55.248 #define SPDK_CONFIG_VHOST 1 00:06:55.248 #define SPDK_CONFIG_VIRTIO 1 00:06:55.248 #undef SPDK_CONFIG_VTUNE 00:06:55.248 #define SPDK_CONFIG_VTUNE_DIR 00:06:55.248 #define SPDK_CONFIG_WERROR 1 00:06:55.248 #define SPDK_CONFIG_WPDK_DIR 00:06:55.248 #undef SPDK_CONFIG_XNVME 00:06:55.248 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:55.248 12:51:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:55.248 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.249 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65343 ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65343 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.TWJhp3 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.TWJhp3/tests/target /tmp/spdk.TWJhp3 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264508416 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267883520 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:55.250 
12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786447872 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5243502592 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=139264 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786447872 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5243502592 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt/output 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=94683267072 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5019512832 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:55.250 * Looking for test storage... 00:06:55.250 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13786447872 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.251 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@436 -- # nvmf_veth_init 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 
00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:06:55.251 Cannot find device "nvmf_tgt_br" 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.251 Cannot find device "nvmf_tgt_br2" 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # true 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:06:55.251 Cannot find device "nvmf_tgt_br" 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:55.251 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:06:55.251 Cannot find device "nvmf_tgt_br2" 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.252 12:51:06 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:06:55.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:06:55.252 00:06:55.252 --- 10.0.0.2 ping statistics --- 00:06:55.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.252 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:06:55.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:06:55.252 00:06:55.252 --- 10.0.0.3 ping statistics --- 00:06:55.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.252 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:55.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:55.252 00:06:55.252 --- 10.0.0.1 ping statistics --- 00:06:55.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.252 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@437 -- # return 0 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 ************************************ 00:06:55.252 START TEST nvmf_filesystem_no_in_capsule 00:06:55.252 ************************************ 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=65498 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 65498 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65498 ']' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
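The nvmf/common.sh steps traced above build the test network everything else in this run relies on: a dedicated network namespace for the target, two veth pairs into it, a host-side veth for the initiator, and a bridge joining the host ends. Condensed into a sketch (names and addresses are exactly the ones used by the harness; run as root):

  # target side lives in its own namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target pair 2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp

The three pings in the trace (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm this plumbing before the target is started.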
00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.252 12:51:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.252 [2024-07-15 12:51:06.677792] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:06:55.252 [2024-07-15 12:51:06.677884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.252 [2024-07-15 12:51:06.813571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.252 [2024-07-15 12:51:06.906743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.252 [2024-07-15 12:51:06.906824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.252 [2024-07-15 12:51:06.906837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.252 [2024-07-15 12:51:06.906845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.252 [2024-07-15 12:51:06.906853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.252 [2024-07-15 12:51:06.906941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.252 [2024-07-15 12:51:06.907021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.252 [2024-07-15 12:51:06.907102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.252 [2024-07-15 12:51:06.907474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.510 [2024-07-15 12:51:07.787109] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.510 
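With the namespace ready, the target is launched inside it and driven over the usual SPDK RPC socket. A minimal reproduction of what the trace shows, assuming an SPDK checkout at the path seen above and that rpc_cmd in the test framework dispatches to scripts/rpc.py (standard SPDK layout, not shown explicitly in the log):

  SPDK=/home/vagrant/spdk_repo/spdk        # path taken from the trace
  # run nvmf_tgt inside the target namespace; -m 0xF gives it four cores
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # the harness's waitforlisten polls /var/tmp/spdk.sock; a crude stand-in:
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # create the TCP transport; -c 0 sets the in-capsule data size to zero,
  # which is what the *_no_in_capsule variant exercises (-u and -o are
  # additional TCP transport tuning flags passed through by the harness)
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192 -c 0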
12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.510 Malloc1 00:06:55.510 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.511 [2024-07-15 12:51:07.915161] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:55.511 { 00:06:55.511 "aliases": [ 00:06:55.511 "f3e95cb9-a712-43e7-8cd7-3e4c9dc465fb" 00:06:55.511 ], 00:06:55.511 "assigned_rate_limits": { 00:06:55.511 "r_mbytes_per_sec": 0, 00:06:55.511 "rw_ios_per_sec": 0, 00:06:55.511 "rw_mbytes_per_sec": 0, 00:06:55.511 "w_mbytes_per_sec": 0 00:06:55.511 }, 00:06:55.511 "block_size": 512, 00:06:55.511 "claim_type": "exclusive_write", 00:06:55.511 "claimed": true, 00:06:55.511 "driver_specific": {}, 00:06:55.511 "memory_domains": [ 00:06:55.511 { 00:06:55.511 "dma_device_id": "system", 00:06:55.511 "dma_device_type": 1 00:06:55.511 }, 00:06:55.511 { 00:06:55.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.511 "dma_device_type": 2 00:06:55.511 } 00:06:55.511 ], 00:06:55.511 "name": "Malloc1", 00:06:55.511 "num_blocks": 1048576, 00:06:55.511 "product_name": "Malloc disk", 00:06:55.511 "supported_io_types": { 00:06:55.511 "abort": true, 00:06:55.511 "compare": false, 00:06:55.511 "compare_and_write": false, 00:06:55.511 "copy": true, 00:06:55.511 "flush": true, 00:06:55.511 "get_zone_info": false, 00:06:55.511 "nvme_admin": false, 00:06:55.511 "nvme_io": false, 00:06:55.511 "nvme_io_md": false, 00:06:55.511 "nvme_iov_md": false, 00:06:55.511 "read": true, 00:06:55.511 "reset": true, 00:06:55.511 "seek_data": false, 00:06:55.511 "seek_hole": false, 00:06:55.511 "unmap": true, 00:06:55.511 "write": true, 00:06:55.511 "write_zeroes": true, 00:06:55.511 "zcopy": true, 00:06:55.511 "zone_append": false, 00:06:55.511 "zone_management": false 00:06:55.511 }, 00:06:55.511 "uuid": "f3e95cb9-a712-43e7-8cd7-3e4c9dc465fb", 00:06:55.511 "zoned": false 00:06:55.511 } 00:06:55.511 ]' 00:06:55.511 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:55.769 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:55.769 12:51:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:55.769 12:51:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:58.316 12:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.244 ************************************ 
00:06:59.244 START TEST filesystem_ext4 00:06:59.244 ************************************ 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:59.244 mke2fs 1.46.5 (30-Dec-2021) 00:06:59.244 Discarding device blocks: 0/522240 done 00:06:59.244 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:59.244 Filesystem UUID: f7008d19-4b01-416e-a40d-71e2a00c0558 00:06:59.244 Superblock backups stored on blocks: 00:06:59.244 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:59.244 00:06:59.244 Allocating group tables: 0/64 done 00:06:59.244 Writing inode tables: 0/64 done 00:06:59.244 Creating journal (8192 blocks): done 00:06:59.244 Writing superblocks and filesystem accounting information: 0/64 done 00:06:59.244 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:59.244 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:59.245 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.502 12:51:11 
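Stepping back, the export-and-attach sequence traced before the filesystem subtests (target/filesystem.sh@53 through @69) reduces to a handful of RPC and nvme-cli calls. The rpc.py path is again an assumption about the checkout layout; the bdev, subsystem, serial, address and port values are the ones in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 512 MiB malloc bdev with 512-byte blocks (1,048,576 blocks), exported via cnode1
  "$RPC" bdev_malloc_create 512 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: attach over NVMe/TCP (the harness also passes a matching
  # --hostid), then carve a single GPT partition on the new namespace, which
  # showed up as nvme0n1 in this run
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe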
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65498 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.502 00:06:59.502 real 0m0.345s 00:06:59.502 user 0m0.024s 00:06:59.502 sys 0m0.048s 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:59.502 ************************************ 00:06:59.502 END TEST filesystem_ext4 00:06:59.502 ************************************ 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.502 ************************************ 00:06:59.502 START TEST filesystem_btrfs 00:06:59.502 ************************************ 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:59.502 
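The make_filesystem calls visible in the trace (common/autotest_common.sh@924 through @935) mostly just pick the right force flag for the mkfs tool; a simplified sketch of that logic as it appears here (the real helper also carries a retry counter, local i=0, omitted for brevity):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      # ext4's mkfs forces with -F, btrfs and xfs use -f (the @929-@932 branches)
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" "$force" "$dev_name"
  }
  # e.g. the btrfs case that follows this point in the log:
  make_filesystem btrfs /dev/nvme0n1p1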
12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:59.502 btrfs-progs v6.6.2 00:06:59.502 See https://btrfs.readthedocs.io for more information. 00:06:59.502 00:06:59.502 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:59.502 NOTE: several default settings have changed in version 5.15, please make sure 00:06:59.502 this does not affect your deployments: 00:06:59.502 - DUP for metadata (-m dup) 00:06:59.502 - enabled no-holes (-O no-holes) 00:06:59.502 - enabled free-space-tree (-R free-space-tree) 00:06:59.502 00:06:59.502 Label: (null) 00:06:59.502 UUID: 9059e3e9-5abe-418f-9e10-06f9566fb257 00:06:59.502 Node size: 16384 00:06:59.502 Sector size: 4096 00:06:59.502 Filesystem size: 510.00MiB 00:06:59.502 Block group profiles: 00:06:59.502 Data: single 8.00MiB 00:06:59.502 Metadata: DUP 32.00MiB 00:06:59.502 System: DUP 8.00MiB 00:06:59.502 SSD detected: yes 00:06:59.502 Zoned device: no 00:06:59.502 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:59.502 Runtime features: free-space-tree 00:06:59.502 Checksum: crc32c 00:06:59.502 Number of devices: 1 00:06:59.502 Devices: 00:06:59.502 ID SIZE PATH 00:06:59.502 1 510.00MiB /dev/nvme0n1p1 00:06:59.502 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:59.502 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65498 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.760 00:06:59.760 real 0m0.178s 00:06:59.760 user 0m0.022s 00:06:59.760 sys 0m0.056s 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.760 12:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:59.760 
************************************ 00:06:59.760 END TEST filesystem_btrfs 00:06:59.760 ************************************ 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.760 ************************************ 00:06:59.760 START TEST filesystem_xfs 00:06:59.760 ************************************ 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:59.760 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:59.760 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:59.760 = sectsz=512 attr=2, projid32bit=1 00:06:59.760 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:59.760 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:59.760 data = bsize=4096 blocks=130560, imaxpct=25 00:06:59.760 = sunit=0 swidth=0 blks 00:06:59.760 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:59.760 log =internal log bsize=4096 blocks=16384, version=2 00:06:59.760 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:59.760 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:00.324 Discarding blocks...Done. 
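After each mkfs (ext4 and btrfs above, xfs here) the subtest body is identical: mount the partition, do a tiny write/delete round-trip, unmount, then confirm the target is still alive and the namespace is still visible. Roughly, with $nvmfpid being the nvmf_tgt pid recorded earlier (65498 in this run):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present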
00:07:00.324 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:00.324 12:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65498 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.862 00:07:02.862 real 0m3.289s 00:07:02.862 user 0m0.019s 00:07:02.862 sys 0m0.052s 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.862 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:02.862 ************************************ 00:07:02.862 END TEST filesystem_xfs 00:07:02.862 ************************************ 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.186 12:51:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65498 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65498 ']' 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65498 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65498 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.186 killing process with pid 65498 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65498' 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65498 00:07:03.186 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65498 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:03.444 00:07:03.444 real 0m9.164s 00:07:03.444 user 0m34.449s 00:07:03.444 sys 0m1.635s 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 END TEST nvmf_filesystem_no_in_capsule 00:07:03.444 ************************************ 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
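The teardown traced above, stripped of the waitforserial_disconnect polling, comes down to the following (rpc.py path assumed as before, $nvmfpid being the target pid, 65498 here):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                 # killprocess in the trace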
00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 START TEST nvmf_filesystem_in_capsule 00:07:03.444 ************************************ 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=65810 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 65810 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65810 ']' 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.444 12:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 [2024-07-15 12:51:15.874335] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:03.444 [2024-07-15 12:51:15.874444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.702 [2024-07-15 12:51:16.010445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.702 [2024-07-15 12:51:16.071831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.702 [2024-07-15 12:51:16.071886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.702 [2024-07-15 12:51:16.071897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.702 [2024-07-15 12:51:16.071905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:03.702 [2024-07-15 12:51:16.071913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.702 [2024-07-15 12:51:16.071996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.702 [2024-07-15 12:51:16.074795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.702 [2024-07-15 12:51:16.074882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.702 [2024-07-15 12:51:16.074893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.636 12:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.636 12:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:04.636 12:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:04.636 12:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.636 12:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.636 [2024-07-15 12:51:17.021367] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.636 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 Malloc1 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.895 12:51:17 
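The second half of the suite (nvmf_filesystem_in_capsule) re-runs the same flow against a fresh target (pid 65810); the only functional difference is the transport creation seen above, which allows 4096 bytes of in-capsule data so that small host writes can be carried inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. The one differing call, side by side:

  # no-in-capsule run (first half of the log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # in-capsule run (this half): allow up to 4 KiB of data per command capsule
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096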
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 [2024-07-15 12:51:17.150596] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:04.895 { 00:07:04.895 "aliases": [ 00:07:04.895 "959f7c1b-0ace-4911-9d17-71c49f5eadf8" 00:07:04.895 ], 00:07:04.895 "assigned_rate_limits": { 00:07:04.895 "r_mbytes_per_sec": 0, 00:07:04.895 "rw_ios_per_sec": 0, 00:07:04.895 "rw_mbytes_per_sec": 0, 00:07:04.895 "w_mbytes_per_sec": 0 00:07:04.895 }, 00:07:04.895 "block_size": 512, 00:07:04.895 "claim_type": "exclusive_write", 00:07:04.895 "claimed": true, 00:07:04.895 "driver_specific": {}, 00:07:04.895 "memory_domains": [ 00:07:04.895 { 00:07:04.895 "dma_device_id": "system", 00:07:04.895 "dma_device_type": 1 00:07:04.895 }, 00:07:04.895 { 00:07:04.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.895 "dma_device_type": 2 00:07:04.895 } 00:07:04.895 ], 00:07:04.895 "name": "Malloc1", 00:07:04.895 "num_blocks": 1048576, 00:07:04.895 "product_name": "Malloc disk", 00:07:04.895 "supported_io_types": { 00:07:04.895 "abort": true, 00:07:04.895 "compare": false, 00:07:04.895 "compare_and_write": false, 00:07:04.895 "copy": true, 00:07:04.895 "flush": true, 00:07:04.895 "get_zone_info": false, 00:07:04.895 "nvme_admin": false, 00:07:04.895 "nvme_io": false, 00:07:04.895 "nvme_io_md": false, 00:07:04.895 "nvme_iov_md": false, 00:07:04.895 "read": true, 00:07:04.895 "reset": true, 00:07:04.895 "seek_data": false, 00:07:04.895 "seek_hole": false, 00:07:04.895 "unmap": true, 
00:07:04.895 "write": true, 00:07:04.895 "write_zeroes": true, 00:07:04.895 "zcopy": true, 00:07:04.895 "zone_append": false, 00:07:04.895 "zone_management": false 00:07:04.895 }, 00:07:04.895 "uuid": "959f7c1b-0ace-4911-9d17-71c49f5eadf8", 00:07:04.895 "zoned": false 00:07:04.895 } 00:07:04.895 ]' 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:04.895 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.153 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.153 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:05.153 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.153 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.153 12:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:07.052 12:51:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:07.052 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:07.310 12:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.244 ************************************ 00:07:08.244 START TEST filesystem_in_capsule_ext4 00:07:08.244 ************************************ 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:08.244 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:08.244 12:51:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:08.244 mke2fs 1.46.5 (30-Dec-2021) 00:07:08.245 Discarding device blocks: 0/522240 done 00:07:08.245 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:08.245 Filesystem UUID: 71fcb46a-f866-4239-a6d9-db36e8478b2c 00:07:08.245 Superblock backups stored on blocks: 00:07:08.245 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:08.245 00:07:08.245 Allocating group tables: 0/64 done 00:07:08.245 Writing inode tables: 0/64 done 00:07:08.245 Creating journal (8192 blocks): done 00:07:08.245 Writing superblocks and filesystem accounting information: 0/64 done 00:07:08.245 00:07:08.245 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:08.245 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65810 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.503 00:07:08.503 real 0m0.312s 00:07:08.503 user 0m0.021s 00:07:08.503 sys 0m0.046s 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:08.503 ************************************ 00:07:08.503 END TEST filesystem_in_capsule_ext4 00:07:08.503 ************************************ 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:08.503 12:51:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.503 ************************************ 00:07:08.503 START TEST filesystem_in_capsule_btrfs 00:07:08.503 ************************************ 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:08.503 12:51:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:08.762 btrfs-progs v6.6.2 00:07:08.762 See https://btrfs.readthedocs.io for more information. 00:07:08.762 00:07:08.762 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:08.762 NOTE: several default settings have changed in version 5.15, please make sure 00:07:08.762 this does not affect your deployments: 00:07:08.762 - DUP for metadata (-m dup) 00:07:08.762 - enabled no-holes (-O no-holes) 00:07:08.762 - enabled free-space-tree (-R free-space-tree) 00:07:08.762 00:07:08.762 Label: (null) 00:07:08.762 UUID: 25ca1e5f-9e3d-4031-977d-4c744b7f1793 00:07:08.762 Node size: 16384 00:07:08.762 Sector size: 4096 00:07:08.762 Filesystem size: 510.00MiB 00:07:08.762 Block group profiles: 00:07:08.762 Data: single 8.00MiB 00:07:08.762 Metadata: DUP 32.00MiB 00:07:08.762 System: DUP 8.00MiB 00:07:08.762 SSD detected: yes 00:07:08.762 Zoned device: no 00:07:08.762 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:08.762 Runtime features: free-space-tree 00:07:08.762 Checksum: crc32c 00:07:08.762 Number of devices: 1 00:07:08.762 Devices: 00:07:08.762 ID SIZE PATH 00:07:08.762 1 510.00MiB /dev/nvme0n1p1 00:07:08.762 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65810 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:08.762 00:07:08.762 real 0m0.186s 00:07:08.762 user 0m0.020s 00:07:08.762 sys 0m0.058s 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:08.762 ************************************ 00:07:08.762 END TEST filesystem_in_capsule_btrfs 00:07:08.762 ************************************ 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.762 ************************************ 00:07:08.762 START TEST filesystem_in_capsule_xfs 00:07:08.762 ************************************ 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:08.762 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:08.763 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:09.020 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:09.021 = sectsz=512 attr=2, projid32bit=1 00:07:09.021 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:09.021 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:09.021 data = bsize=4096 blocks=130560, imaxpct=25 00:07:09.021 = sunit=0 swidth=0 blks 00:07:09.021 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:09.021 log =internal log bsize=4096 blocks=16384, version=2 00:07:09.021 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:09.021 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:09.586 Discarding blocks...Done. 
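All three in-capsule filesystem tests above (ext4, btrfs, xfs) funnel through the same make_filesystem helper that the xtrace lines at autotest_common.sh@924-@943 walk through: record the filesystem type and device, pick the force flag (-F for ext4, -f for btrfs/xfs), run mkfs, return 0 on success. A minimal sketch reconstructed from that trace follows; the retry bound is an assumption, since the trace only shows i=0 and the final return 0.

# Sketch of make_filesystem as traced at autotest_common.sh@924-@943.
# Only the variables, the force-flag choice and the mkfs call come from the log;
# the retry limit is illustrative.
make_filesystem() {
    local fstype=$1      # @924: ext4, btrfs or xfs
    local dev_name=$2    # @925: /dev/nvme0n1p1 in these tests
    local i=0            # @926: retry counter
    local force          # @927

    if [ "$fstype" = ext4 ]; then    # @929
        force=-F                     # mkfs.ext4 takes -F
    else
        force=-f                     # @932: mkfs.btrfs / mkfs.xfs take -f
    fi

    # @935: build the filesystem on the NVMe-oF attached namespace, retrying
    # briefly in case the partition node is not ready yet (bound assumed).
    until "mkfs.$fstype" $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -ge 15 ] && return 1
        sleep 1
    done
    return 0                         # @943
}

After mkfs, each variant mounts the partition, touches and removes a file, syncs, unmounts, and uses lsblk to confirm the namespace and partition are still present (target/filesystem.sh@23-@43 in the trace above).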
00:07:09.586 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:09.586 12:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65810 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.486 00:07:11.486 real 0m2.577s 00:07:11.486 user 0m0.014s 00:07:11.486 sys 0m0.054s 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:11.486 ************************************ 00:07:11.486 END TEST filesystem_in_capsule_xfs 00:07:11.486 ************************************ 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.486 12:51:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65810 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65810 ']' 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65810 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65810 00:07:11.486 killing process with pid 65810 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65810' 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65810 00:07:11.486 12:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65810 00:07:11.745 12:51:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.745 00:07:11.745 real 0m8.376s 00:07:11.745 user 0m31.655s 00:07:11.745 sys 0m1.485s 00:07:11.745 12:51:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.745 ************************************ 00:07:11.745 END TEST nvmf_filesystem_in_capsule 00:07:11.745 ************************************ 00:07:11.745 12:51:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # 
nvmfcleanup 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.003 rmmod nvme_tcp 00:07:12.003 rmmod nvme_fabrics 00:07:12.003 rmmod nvme_keyring 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:12.003 00:07:12.003 real 0m18.330s 00:07:12.003 user 1m6.344s 00:07:12.003 sys 0m3.474s 00:07:12.003 ************************************ 00:07:12.003 END TEST nvmf_filesystem 00:07:12.003 ************************************ 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.003 12:51:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.003 12:51:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:12.003 12:51:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:12.003 12:51:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:12.003 12:51:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.003 12:51:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.003 ************************************ 00:07:12.003 START TEST nvmf_target_discovery 00:07:12.003 ************************************ 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:12.004 * Looking for test storage... 
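The lines just above are the standard bracket every per-test script ends with; condensed from the trace (names, PID and NQN are the ones from this run, and rpc_cmd, killprocess and _remove_spdk_ns are harness helpers shown in the xtrace, not standalone binaries), the teardown amounts to:

# filesystem.sh@91-@101: undo what the test set up
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # detach the initiator
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem
killprocess 65810                                         # stop nvmf_tgt

# nvmftestfini -> nvmfcleanup (nvmf/common.sh@121-@127), then nvmf_tcp_fini (@278-@283)
sync
modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics
_remove_spdk_ns                # drop the target network namespace
ip -4 addr flush nvmf_init_if  # clear the initiator-side address

The harness then moves straight on to the next script, discovery.sh, whose storage probe continues below.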
00:07:12.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 
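The long PATH lines above are just paths/export.sh being traced while test/nvmf/common.sh is sourced. The part that matters for the test is the preamble discovery.sh runs before doing any work, sketched here from the trace (the testdir line is an assumption; the values and helper names mirror the log):

# Sketch of target/discovery.sh@9-@21 as traced above and further below.
testdir=$(readlink -f "$(dirname "$0")")   # assumed; the trace only shows the sourced path
source "$testdir/../common.sh"             # @9: sets NVMF_PORT=4420, NVME_HOSTNQN, NET_TYPE=virt, ...

NULL_BDEV_SIZE=102400      # @11: size of the Null bdevs the test exports
NULL_BLOCK_SIZE=512        # @12
NVMF_PORT_REFERRAL=4430    # @13: port advertised as a discovery referral

nvmftestinit               # @20: build the veth/netns topology, load nvme-tcp
nvmfappstart -m 0xF        # @21: launch nvmf_tgt inside the target namespace

nvmftestinit, whose internals follow immediately below, sets up the virtual network; nvmfappstart then starts the target application.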
00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.004 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:12.263 Cannot find device "nvmf_tgt_br" 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:12.263 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.264 Cannot find device "nvmf_tgt_br2" 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # true 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 
00:07:12.264 Cannot find device "nvmf_tgt_br" 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:12.264 Cannot find device "nvmf_tgt_br2" 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:12.264 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@200 -- # 
ip link set nvmf_init_br master nvmf_br 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:07:12.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:07:12.522 00:07:12.522 --- 10.0.0.2 ping statistics --- 00:07:12.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.522 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:12.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:12.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:12.522 00:07:12.522 --- 10.0.0.3 ping statistics --- 00:07:12.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.522 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:12.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:12.522 00:07:12.522 --- 10.0.0.1 ping statistics --- 00:07:12.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.522 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@437 -- # return 0 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@485 -- # nvmfpid=66259 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@486 -- # waitforlisten 66259 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66259 ']' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.522 12:51:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.522 [2024-07-15 12:51:24.894108] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:12.522 [2024-07-15 12:51:24.894255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.779 [2024-07-15 12:51:25.035531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.779 [2024-07-15 12:51:25.122550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.779 [2024-07-15 12:51:25.122628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.779 [2024-07-15 12:51:25.122646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.779 [2024-07-15 12:51:25.122659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.779 [2024-07-15 12:51:25.122670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
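Stepping back: the "Cannot find device" and "Cannot open network namespace" messages further up are only the pre-cleanup pass failing harmlessly before nvmf_veth_init rebuilds the topology. Condensed from the commands in the trace (nvmf/common.sh@170-@211), the setup the target just started on top of is:

# Bridge joining the host-side initiator veth and the target veths that live
# inside the nvmf_tgt_ns_spdk namespace; names and addresses are the log's own.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity-check initiator -> target reachability, as in the trace

nvmf_tgt (pid 66259) is then launched inside that namespace with -e 0xFFFF -m 0xF, which is the DPDK and reactor startup visible around this point in the log; the discovery output below reaches it at 10.0.0.2 port 4420.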
00:07:12.779 [2024-07-15 12:51:25.122798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.779 [2024-07-15 12:51:25.123258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.779 [2024-07-15 12:51:25.123658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.779 [2024-07-15 12:51:25.123681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.716 12:51:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.716 12:51:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:13.716 12:51:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:13.716 12:51:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.716 12:51:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.716 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.716 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.716 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.716 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.716 [2024-07-15 12:51:26.025747] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.716 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 Null1 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.717 [2024-07-15 12:51:26.084378] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 Null2 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 Null3 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 Null4 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.717 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.977 
12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 4420 00:07:13.977 00:07:13.977 Discovery Log Number of Records 6, Generation counter 6 00:07:13.977 =====Discovery Log Entry 0====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: current discovery subsystem 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4420 00:07:13.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: explicit discovery connections, duplicate discovery information 00:07:13.977 sectype: none 00:07:13.977 =====Discovery Log Entry 1====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: nvme subsystem 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4420 00:07:13.977 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: none 00:07:13.977 sectype: none 00:07:13.977 =====Discovery Log Entry 2====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: nvme subsystem 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4420 00:07:13.977 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: none 00:07:13.977 sectype: none 00:07:13.977 =====Discovery Log Entry 3====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: nvme subsystem 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4420 00:07:13.977 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: none 00:07:13.977 sectype: none 00:07:13.977 =====Discovery Log Entry 4====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: nvme subsystem 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4420 00:07:13.977 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: none 00:07:13.977 sectype: none 00:07:13.977 =====Discovery Log Entry 5====== 00:07:13.977 trtype: tcp 00:07:13.977 adrfam: ipv4 00:07:13.977 subtype: discovery subsystem referral 00:07:13.977 treq: not required 00:07:13.977 portid: 0 00:07:13.977 trsvcid: 4430 00:07:13.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.977 traddr: 10.0.0.2 00:07:13.977 eflags: none 00:07:13.977 sectype: none 00:07:13.977 Perform nvmf subsystem discovery via RPC 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.977 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.977 [ 00:07:13.977 { 00:07:13.977 "allow_any_host": true, 00:07:13.977 "hosts": [], 00:07:13.977 "listen_addresses": [ 00:07:13.977 { 00:07:13.977 "adrfam": "IPv4", 00:07:13.977 "traddr": "10.0.0.2", 00:07:13.977 "trsvcid": "4420", 00:07:13.977 "trtype": "TCP" 00:07:13.977 } 00:07:13.977 ], 00:07:13.977 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:13.977 "subtype": "Discovery" 00:07:13.977 }, 00:07:13.977 { 00:07:13.977 "allow_any_host": true, 00:07:13.977 "hosts": [], 00:07:13.977 "listen_addresses": [ 00:07:13.977 { 
00:07:13.977 "adrfam": "IPv4", 00:07:13.977 "traddr": "10.0.0.2", 00:07:13.977 "trsvcid": "4420", 00:07:13.977 "trtype": "TCP" 00:07:13.977 } 00:07:13.977 ], 00:07:13.978 "max_cntlid": 65519, 00:07:13.978 "max_namespaces": 32, 00:07:13.978 "min_cntlid": 1, 00:07:13.978 "model_number": "SPDK bdev Controller", 00:07:13.978 "namespaces": [ 00:07:13.978 { 00:07:13.978 "bdev_name": "Null1", 00:07:13.978 "name": "Null1", 00:07:13.978 "nguid": "D5E63413F88342D283C5E24437C8C87C", 00:07:13.978 "nsid": 1, 00:07:13.978 "uuid": "d5e63413-f883-42d2-83c5-e24437c8c87c" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:13.978 "serial_number": "SPDK00000000000001", 00:07:13.978 "subtype": "NVMe" 00:07:13.978 }, 00:07:13.978 { 00:07:13.978 "allow_any_host": true, 00:07:13.978 "hosts": [], 00:07:13.978 "listen_addresses": [ 00:07:13.978 { 00:07:13.978 "adrfam": "IPv4", 00:07:13.978 "traddr": "10.0.0.2", 00:07:13.978 "trsvcid": "4420", 00:07:13.978 "trtype": "TCP" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "max_cntlid": 65519, 00:07:13.978 "max_namespaces": 32, 00:07:13.978 "min_cntlid": 1, 00:07:13.978 "model_number": "SPDK bdev Controller", 00:07:13.978 "namespaces": [ 00:07:13.978 { 00:07:13.978 "bdev_name": "Null2", 00:07:13.978 "name": "Null2", 00:07:13.978 "nguid": "E8F2D08835D24803B1B792B58A54FD7D", 00:07:13.978 "nsid": 1, 00:07:13.978 "uuid": "e8f2d088-35d2-4803-b1b7-92b58a54fd7d" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:13.978 "serial_number": "SPDK00000000000002", 00:07:13.978 "subtype": "NVMe" 00:07:13.978 }, 00:07:13.978 { 00:07:13.978 "allow_any_host": true, 00:07:13.978 "hosts": [], 00:07:13.978 "listen_addresses": [ 00:07:13.978 { 00:07:13.978 "adrfam": "IPv4", 00:07:13.978 "traddr": "10.0.0.2", 00:07:13.978 "trsvcid": "4420", 00:07:13.978 "trtype": "TCP" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "max_cntlid": 65519, 00:07:13.978 "max_namespaces": 32, 00:07:13.978 "min_cntlid": 1, 00:07:13.978 "model_number": "SPDK bdev Controller", 00:07:13.978 "namespaces": [ 00:07:13.978 { 00:07:13.978 "bdev_name": "Null3", 00:07:13.978 "name": "Null3", 00:07:13.978 "nguid": "0D5DEE8F9C5B4FC880A6DD079AE95E12", 00:07:13.978 "nsid": 1, 00:07:13.978 "uuid": "0d5dee8f-9c5b-4fc8-80a6-dd079ae95e12" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:13.978 "serial_number": "SPDK00000000000003", 00:07:13.978 "subtype": "NVMe" 00:07:13.978 }, 00:07:13.978 { 00:07:13.978 "allow_any_host": true, 00:07:13.978 "hosts": [], 00:07:13.978 "listen_addresses": [ 00:07:13.978 { 00:07:13.978 "adrfam": "IPv4", 00:07:13.978 "traddr": "10.0.0.2", 00:07:13.978 "trsvcid": "4420", 00:07:13.978 "trtype": "TCP" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "max_cntlid": 65519, 00:07:13.978 "max_namespaces": 32, 00:07:13.978 "min_cntlid": 1, 00:07:13.978 "model_number": "SPDK bdev Controller", 00:07:13.978 "namespaces": [ 00:07:13.978 { 00:07:13.978 "bdev_name": "Null4", 00:07:13.978 "name": "Null4", 00:07:13.978 "nguid": "CD0119493A85446696A1E1E3951500B2", 00:07:13.978 "nsid": 1, 00:07:13.978 "uuid": "cd011949-3a85-4466-96a1-e1e3951500b2" 00:07:13.978 } 00:07:13.978 ], 00:07:13.978 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:13.978 "serial_number": "SPDK00000000000004", 00:07:13.978 "subtype": "NVMe" 00:07:13.978 } 00:07:13.978 ] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:13.978 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:14.237 rmmod nvme_tcp 00:07:14.237 rmmod nvme_fabrics 00:07:14.237 rmmod nvme_keyring 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@493 -- # '[' -n 66259 ']' 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@494 -- # killprocess 66259 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66259 ']' 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66259 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66259 00:07:14.237 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.238 
12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.238 killing process with pid 66259 00:07:14.238 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66259' 00:07:14.238 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66259 00:07:14.238 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66259 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.496 12:51:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:14.496 00:07:14.497 real 0m2.401s 00:07:14.497 user 0m6.817s 00:07:14.497 sys 0m0.546s 00:07:14.497 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.497 12:51:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.497 ************************************ 00:07:14.497 END TEST nvmf_target_discovery 00:07:14.497 ************************************ 00:07:14.497 12:51:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.497 12:51:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:14.497 12:51:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.497 12:51:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.497 12:51:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.497 ************************************ 00:07:14.497 START TEST nvmf_referrals 00:07:14.497 ************************************ 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:14.497 * Looking for test storage... 
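The teardown just traced for the discovery test condenses to the loop below. This is a minimal sketch that drives the same RPCs through SPDK's scripts/rpc.py client rather than the suite's rpc_cmd helper; the default RPC socket and a checkout-root working directory are assumed.

  # Delete the four test subsystems and the null bdevs backing them,
  # then drop the discovery referral and confirm no bdevs are left.
  for i in $(seq 1 4); do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
      ./scripts/rpc.py bdev_null_delete "Null${i}"
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # expected to print nothing

The empty check_bdevs value in the trace above is exactly that last step: an empty name list means every Null bdev the test created was cleaned up before nvmftestfini tore the target down.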
00:07:14.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:14.497 12:51:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:14.497 Cannot find device "nvmf_tgt_br" 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:14.497 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:14.756 Cannot find device "nvmf_tgt_br2" 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # true 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:07:14.756 Cannot find device "nvmf_tgt_br" 00:07:14.756 12:51:26 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:14.756 Cannot find device "nvmf_tgt_br2" 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:14.756 12:51:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:14.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:14.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:07:14.756 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.014 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 
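The interface plumbing traced above (the suite's nvmf_veth_init path) boils down to the sketch below: a network namespace to hold the SPDK target, veth pairs, and a bridge joining the host-side ends so the initiator at 10.0.0.1 can reach the target at 10.0.0.2. Names, addresses, and rules are copied from the log; this is a condensed illustration, not the helper's exact code, and the second target pair (nvmf_tgt_if2 at 10.0.0.3) is set up the same way and omitted here.

  ip netns add nvmf_tgt_ns_spdk                              # namespace for the target app
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address (host)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge the host-side ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this topology in place, the referral test that follows starts nvmf_tgt inside the namespace, adds a discovery listener on 10.0.0.2:8009, and exercises nvmf_discovery_add_referral, nvmf_discovery_get_referrals, and nvmf_discovery_remove_referral while `nvme discover` runs from the host side to verify what the discovery log page reports.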
00:07:15.014 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.014 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.014 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:07:15.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:15.015 00:07:15.015 --- 10.0.0.2 ping statistics --- 00:07:15.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.015 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:15.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:15.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:07:15.015 00:07:15.015 --- 10.0.0.3 ping statistics --- 00:07:15.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.015 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:07:15.015 00:07:15.015 --- 10.0.0.1 ping statistics --- 00:07:15.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.015 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@437 -- # return 0 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@485 -- # nvmfpid=66493 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@486 -- # waitforlisten 66493 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66493 ']' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.015 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.015 12:51:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:15.015 [2024-07-15 12:51:27.374594] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:15.015 [2024-07-15 12:51:27.374739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.273 [2024-07-15 12:51:27.533809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.273 [2024-07-15 12:51:27.617959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.273 [2024-07-15 12:51:27.618016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.273 [2024-07-15 12:51:27.618028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.273 [2024-07-15 12:51:27.618037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.273 [2024-07-15 12:51:27.618044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.273 [2024-07-15 12:51:27.618154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.273 [2024-07-15 12:51:27.618220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.273 [2024-07-15 12:51:27.618696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.273 [2024-07-15 12:51:27.618710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 [2024-07-15 12:51:28.577421] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 [2024-07-15 12:51:28.605507] 
tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.206 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.464 12:51:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.723 12:51:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:16.723 12:51:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.723 12:51:29 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:16.982 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:17.240 12:51:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:17.240 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:17.241 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:17.241 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:07:17.241 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.498 rmmod nvme_tcp 00:07:17.498 rmmod nvme_fabrics 00:07:17.498 rmmod nvme_keyring 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@493 -- # '[' -n 66493 ']' 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@494 -- # killprocess 66493 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66493 ']' 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66493 00:07:17.498 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66493 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.499 killing process with pid 66493 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66493' 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66493 00:07:17.499 12:51:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66493 00:07:17.756 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:17.757 00:07:17.757 
real 0m3.247s 00:07:17.757 user 0m10.889s 00:07:17.757 sys 0m0.804s 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.757 12:51:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:17.757 ************************************ 00:07:17.757 END TEST nvmf_referrals 00:07:17.757 ************************************ 00:07:17.757 12:51:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:17.757 12:51:30 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:17.757 12:51:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.757 12:51:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.757 12:51:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.757 ************************************ 00:07:17.757 START TEST nvmf_connect_disconnect 00:07:17.757 ************************************ 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:17.757 * Looking for test storage... 00:07:17.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.757 12:51:30 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.757 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.757 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:18.015 12:51:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:18.015 Cannot find device "nvmf_tgt_br" 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:18.015 Cannot find device "nvmf_tgt_br2" 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:07:18.015 Cannot find device "nvmf_tgt_br" 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:18.015 Cannot find device "nvmf_tgt_br2" 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:18.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:18.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:18.015 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:18.272 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:07:18.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:07:18.273 00:07:18.273 --- 10.0.0.2 ping statistics --- 00:07:18.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.273 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:18.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:18.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:07:18.273 00:07:18.273 --- 10.0.0.3 ping statistics --- 00:07:18.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.273 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:18.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:07:18.273 00:07:18.273 --- 10.0.0.1 ping statistics --- 00:07:18.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.273 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@437 -- # return 0 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # nvmfpid=66796 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # waitforlisten 66796 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66796 ']' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.273 12:51:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:18.273 [2024-07-15 12:51:30.713739] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:18.273 [2024-07-15 12:51:30.714652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.530 [2024-07-15 12:51:30.866100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.530 [2024-07-15 12:51:30.940142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
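The nvmf_veth_init and nvmfappstart work traced above boils down to a small amount of iproute2/iptables plumbing plus one namespaced process launch: a target network namespace, three veth pairs (one for the initiator, two for the target's ports), a bridge joining the host-side peers, and nvmf_tgt started inside the namespace. A condensed sketch, using only commands that appear in the trace (interface, namespace, and address names exactly as logged):

  # target namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: 10.0.0.1 = initiator side, 10.0.0.2/10.0.0.3 = target ports inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring all links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP traffic (port 4420) in and let the bridge forward
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # start the target inside the namespace, as nvmfappstart does above, and background it
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The three ping checks above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the harness confirming this bridge passes traffic before any NVMe/TCP work begins.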
00:07:18.530 [2024-07-15 12:51:30.940222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.530 [2024-07-15 12:51:30.940245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.530 [2024-07-15 12:51:30.940264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.530 [2024-07-15 12:51:30.940279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.530 [2024-07-15 12:51:30.940388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.530 [2024-07-15 12:51:30.940821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.530 [2024-07-15 12:51:30.941187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.530 [2024-07-15 12:51:30.941206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.462 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.462 [2024-07-15 12:51:31.907659] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
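Everything the connect/disconnect target needs is then provisioned over JSON-RPC: the rpc_cmd calls traced above create the TCP transport, a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), and the cnode1 subsystem with that bdev as a namespace, and the nvmf_subsystem_add_listener call that follows immediately below exposes it on 10.0.0.2:4420. Collected in one place as a sketch against SPDK's stock scripts/rpc.py client (rpc_cmd in the harness forwards to the same JSON-RPC interface; the default /var/tmp/spdk.sock socket is assumed, matching the waitforlisten message above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # transport, with the same options the test passes to rpc_cmd
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0

  # backing bdev; the call returns its name (Malloc0 here)
  $RPC bdev_malloc_create 64 512

  # subsystem, namespace, and the TCP listener added just below in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420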
00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.719 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.720 [2024-07-15 12:51:31.972011] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:19.720 12:51:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:22.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.103 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.104 rmmod nvme_tcp 00:07:31.104 rmmod nvme_fabrics 00:07:31.104 rmmod nvme_keyring 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # '[' -n 66796 ']' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # killprocess 66796 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66796 ']' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66796 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66796 00:07:31.104 killing process with pid 66796 00:07:31.104 12:51:43 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66796' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66796 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66796 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:31.104 00:07:31.104 real 0m13.380s 00:07:31.104 user 0m48.828s 00:07:31.104 sys 0m2.099s 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.104 12:51:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:31.104 ************************************ 00:07:31.104 END TEST nvmf_connect_disconnect 00:07:31.104 ************************************ 00:07:31.104 12:51:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:31.104 12:51:43 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:31.104 12:51:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.104 12:51:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.104 12:51:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.104 ************************************ 00:07:31.104 START TEST nvmf_multitarget 00:07:31.104 ************************************ 00:07:31.104 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:31.362 * Looking for test storage... 
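The five "disconnected 1 controller(s)" notices above are the actual test body: num_iterations=5 rounds of connecting the host to cnode1 and tearing the association back down. The harness drives this through its own helpers, but one iteration reduces to plain nvme-cli along these lines (a hedged reconstruction from the NQN, address, and port in the trace, not copied from connect_disconnect.sh):

  # connect the initiator (10.0.0.1 side) to the listener at 10.0.0.2:4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

  # disconnect again; nvme-cli prints the "NQN:... disconnected 1 controller(s)"
  # line that appears five times above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1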
00:07:31.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:31.362 12:51:43 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.362 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:31.363 Cannot find device "nvmf_tgt_br" 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.363 Cannot find device "nvmf_tgt_br2" 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:07:31.363 Cannot find device "nvmf_tgt_br" 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:31.363 Cannot find device "nvmf_tgt_br2" 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:31.363 
12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.363 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # ping -c 1 
10.0.0.2 00:07:31.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:31.621 00:07:31.621 --- 10.0.0.2 ping statistics --- 00:07:31.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.621 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:31.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:31.621 00:07:31.621 --- 10.0.0.3 ping statistics --- 00:07:31.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.621 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:07:31.621 00:07:31.621 --- 10.0.0.1 ping statistics --- 00:07:31.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.621 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@437 -- # return 0 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@485 -- # nvmfpid=67196 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@486 -- # waitforlisten 67196 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67196 ']' 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
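One harness wart repeats in each test's preamble here (above for nvmf_multitarget, and earlier for nvmf_connect_disconnect): inside build_nvmf_app_args, nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', so bash prints "[: : integer expression expected" because the variable being tested expands to an empty string in this CI configuration. It is harmless in practice — the condition simply evaluates false and the optional arguments are skipped — but the usual defensive spelling avoids the noise. VARIABLE below is a placeholder; the actual name tested on line 33 is not visible in the xtrace output:

  # default to 0 when unset/empty instead of handing test(1) an empty operand
  if [ "${VARIABLE:-0}" -eq 1 ]; then
          NVMF_APP+=(--some-optional-arg)   # placeholder for whatever the guarded branch appends
  fi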
00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.621 12:51:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:31.621 [2024-07-15 12:51:44.047737] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:31.621 [2024-07-15 12:51:44.047876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.879 [2024-07-15 12:51:44.181980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.879 [2024-07-15 12:51:44.259650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.879 [2024-07-15 12:51:44.259705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.879 [2024-07-15 12:51:44.259717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.879 [2024-07-15 12:51:44.259727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.879 [2024-07-15 12:51:44.259734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.879 [2024-07-15 12:51:44.259857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.879 [2024-07-15 12:51:44.260267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.879 [2024-07-15 12:51:44.260841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.879 [2024-07-15 12:51:44.260846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:32.812 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:33.069 "nvmf_tgt_1" 00:07:33.069 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:33.327 "nvmf_tgt_2" 00:07:33.327 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:07:33.327 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:33.327 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:33.327 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:33.585 true 00:07:33.585 12:51:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:33.585 true 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.842 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.842 rmmod nvme_tcp 00:07:34.098 rmmod nvme_fabrics 00:07:34.098 rmmod nvme_keyring 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@493 -- # '[' -n 67196 ']' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@494 -- # killprocess 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67196 ']' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.098 killing process with pid 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67196' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67196 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.098 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.355 12:51:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:34.355 00:07:34.355 real 0m3.040s 00:07:34.355 user 0m10.532s 00:07:34.355 sys 0m0.651s 00:07:34.355 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.355 12:51:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:34.355 ************************************ 00:07:34.355 END TEST nvmf_multitarget 00:07:34.355 ************************************ 00:07:34.355 12:51:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:34.355 12:51:46 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:34.355 12:51:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.355 12:51:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.355 12:51:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.355 ************************************ 00:07:34.355 START TEST nvmf_rpc 00:07:34.355 ************************************ 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:34.355 * Looking for test storage... 
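The nvmf_multitarget pass that just finished is pure RPC bookkeeping: count the targets, create two named targets, delete them, and check the count is back where it started. The exact sequence, as issued through the test's multitarget_rpc.py wrapper above (jq length is the same count check the script performs at each step):

  MT=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  $MT nvmf_get_targets | jq length            # 1: only the default target exists
  $MT nvmf_create_target -n nvmf_tgt_1 -s 32
  $MT nvmf_create_target -n nvmf_tgt_2 -s 32
  $MT nvmf_get_targets | jq length            # 3: default target plus the two just created
  $MT nvmf_delete_target -n nvmf_tgt_1
  $MT nvmf_delete_target -n nvmf_tgt_2
  $MT nvmf_get_targets | jq length            # back to 1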
00:07:34.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.355 12:51:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:34.356 12:51:46 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:34.356 Cannot find device "nvmf_tgt_br" 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:34.356 Cannot find device "nvmf_tgt_br2" 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # true 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:07:34.356 Cannot find device "nvmf_tgt_br" 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:34.356 Cannot find device "nvmf_tgt_br2" 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:34.356 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:34.613 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 
type veth peer name nvmf_tgt_br2 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.614 12:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:07:34.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:07:34.614 00:07:34.614 --- 10.0.0.2 ping statistics --- 00:07:34.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.614 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:34.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:07:34.614 00:07:34.614 --- 10.0.0.3 ping statistics --- 00:07:34.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.614 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:34.614 00:07:34.614 --- 10.0.0.1 ping statistics --- 00:07:34.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.614 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@437 -- # return 0 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@485 -- # nvmfpid=67428 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@486 -- # waitforlisten 67428 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67428 ']' 00:07:34.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.614 12:51:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.872 [2024-07-15 12:51:47.152453] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:34.872 [2024-07-15 12:51:47.152590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.872 [2024-07-15 12:51:47.299499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.135 [2024-07-15 12:51:47.370900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.135 [2024-07-15 12:51:47.370960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
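The nvmf_veth_init block above builds the test network that the rest of this run relies on: three veth pairs, a bridge, a network namespace holding the target-side interfaces, and iptables rules admitting NVMe/TCP traffic on port 4420, all verified with three pings before nvmf_tgt is started inside the namespace. What follows is a condensed, hand-written sketch of that bring-up using only the interface names, addresses, and rules visible in the trace; paths are shortened and error handling is omitted, so treat it as an illustration of what the harness does here rather than the nvmf/common.sh source itself.

    # Namespace for the target; the initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the test addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

    # Bring everything up and enslave the bridge-side ends to one bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic (port 4420) in and let the bridge forward.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity checks in both directions, then start the target in the namespace.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &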
00:07:35.135 [2024-07-15 12:51:47.370974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.135 [2024-07-15 12:51:47.370985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.135 [2024-07-15 12:51:47.370994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.135 [2024-07-15 12:51:47.371097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.135 [2024-07-15 12:51:47.371186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.135 [2024-07-15 12:51:47.371547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.135 [2024-07-15 12:51:47.371555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:36.082 "poll_groups": [ 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_000", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_001", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_002", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_003", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [] 00:07:36.082 } 00:07:36.082 ], 00:07:36.082 "tick_rate": 2200000000 00:07:36.082 }' 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 [2024-07-15 12:51:48.406785] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.082 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:36.082 "poll_groups": [ 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_000", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [ 00:07:36.082 { 00:07:36.082 "trtype": "TCP" 00:07:36.082 } 00:07:36.082 ] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_001", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [ 00:07:36.082 { 00:07:36.082 "trtype": "TCP" 00:07:36.082 } 00:07:36.082 ] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_002", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [ 00:07:36.082 { 00:07:36.082 "trtype": "TCP" 00:07:36.082 } 00:07:36.082 ] 00:07:36.082 }, 00:07:36.082 { 00:07:36.082 "admin_qpairs": 0, 00:07:36.082 "completed_nvme_io": 0, 00:07:36.082 "current_admin_qpairs": 0, 00:07:36.082 "current_io_qpairs": 0, 00:07:36.082 "io_qpairs": 0, 00:07:36.082 "name": "nvmf_tgt_poll_group_003", 00:07:36.082 "pending_bdev_io": 0, 00:07:36.082 "transports": [ 00:07:36.082 { 00:07:36.082 "trtype": "TCP" 00:07:36.082 } 00:07:36.082 ] 00:07:36.082 } 00:07:36.082 ], 00:07:36.082 "tick_rate": 2200000000 00:07:36.082 }' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
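The jcount/jsum helpers exercised here check the shape of the captured nvmf_get_stats output: four poll groups (one per core in the -m 0xF mask) and, with no transport or connection in place yet, zero admin and I/O qpairs. A minimal reconstruction of those helpers, assuming rpc_cmd is a thin wrapper around scripts/rpc.py on /var/tmp/spdk.sock and that the JSON is read from the captured $stats variable rather than re-queried, looks roughly like this:

    stats=$(rpc_cmd nvmf_get_stats)                         # captured once, as in the trace
    jcount() { jq "$1" <<<"$stats" | wc -l; }               # count values a jq filter yields
    jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }   # add them up

    (( $(jcount '.poll_groups[].name') == 4 ))              # one poll group per reactor core
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))        # no admin qpairs before any connect
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))           # no I/O qpairs either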
00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:36.083 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 Malloc1 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 [2024-07-15 12:51:48.646451] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:07:36.341 [2024-07-15 12:51:48.672687] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a' 00:07:36.341 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:36.341 could not add new controller: failed to write to nvme-fabrics device 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.341 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.599 12:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.599 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:36.599 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.599 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:36.599 12:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:38.506 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:38.506 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.507 12:51:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:38.507 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.764 [2024-07-15 12:51:50.974175] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a' 00:07:38.764 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:38.764 could not add new controller: failed to write to nvme-fabrics device 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.764 12:51:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.764 12:51:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.764 12:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:38.764 12:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.764 12:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:38.764 12:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:41.289 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:41.290 12:51:53 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.290 [2024-07-15 12:51:53.278322] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.290 12:51:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.193 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 [2024-07-15 12:51:55.665687] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.451 12:51:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 [2024-07-15 12:51:57.949018] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.978 12:51:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.978 12:51:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.978 12:51:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.978 12:51:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.978 12:51:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:45.978 12:51:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:47.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.874 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.875 [2024-07-15 12:52:00.220953] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.875 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.132 12:52:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.132 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:48.132 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:48.132 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:48.132 12:52:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:50.027 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:50.028 
12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.028 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.286 [2024-07-15 12:52:02.504271] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.286 12:52:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:50.286 12:52:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 [2024-07-15 12:52:04.803932] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.813 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 [2024-07-15 12:52:04.859981] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 [2024-07-15 12:52:04.907925] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 [2024-07-15 12:52:04.956087] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:52.814 12:52:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
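The trace in this stretch repeats one RPC-driven lifecycle: create a subsystem, expose it on the 10.0.0.2:4420 TCP listener, attach the Malloc1 namespace, allow any host, then tear it all down. The earlier loop added an nvme connect/disconnect round trip from the initiator between setup and teardown; this second loop drives only the RPCs. Stripped of the xtrace noise, one iteration amounts to roughly the sketch below; command names and arguments are taken from the trace, while the loop framing is a simplification rather than the target/rpc.sh wording.

    for i in 1 2 3 4 5; do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # first loop passed -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        # First loop only: connect from the initiator, wait for the block device with
        # serial SPDKISFASTANDAWESOME to show up in lsblk, then disconnect again.
        # nvme connect --hostnqn=<host NQN> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # nvme disconnect -n nqn.2016-06.io.spdk:cnode1

        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # first loop removed nsid 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done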
00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 [2024-07-15 12:52:05.012142] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:52.814 "poll_groups": [ 00:07:52.814 { 00:07:52.814 "admin_qpairs": 2, 00:07:52.814 "completed_nvme_io": 66, 00:07:52.814 "current_admin_qpairs": 0, 00:07:52.814 "current_io_qpairs": 0, 00:07:52.814 "io_qpairs": 16, 00:07:52.814 "name": "nvmf_tgt_poll_group_000", 00:07:52.814 "pending_bdev_io": 0, 00:07:52.814 "transports": [ 00:07:52.814 { 00:07:52.814 "trtype": "TCP" 00:07:52.814 } 00:07:52.814 ] 00:07:52.814 }, 00:07:52.814 { 00:07:52.814 "admin_qpairs": 3, 00:07:52.814 "completed_nvme_io": 116, 00:07:52.814 "current_admin_qpairs": 0, 00:07:52.814 "current_io_qpairs": 0, 00:07:52.814 "io_qpairs": 17, 00:07:52.814 "name": "nvmf_tgt_poll_group_001", 00:07:52.814 "pending_bdev_io": 0, 00:07:52.814 "transports": [ 00:07:52.814 { 00:07:52.814 "trtype": "TCP" 00:07:52.814 } 00:07:52.814 ] 00:07:52.814 }, 00:07:52.814 { 00:07:52.814 "admin_qpairs": 1, 00:07:52.814 
"completed_nvme_io": 168, 00:07:52.814 "current_admin_qpairs": 0, 00:07:52.814 "current_io_qpairs": 0, 00:07:52.814 "io_qpairs": 19, 00:07:52.814 "name": "nvmf_tgt_poll_group_002", 00:07:52.814 "pending_bdev_io": 0, 00:07:52.814 "transports": [ 00:07:52.814 { 00:07:52.814 "trtype": "TCP" 00:07:52.814 } 00:07:52.814 ] 00:07:52.814 }, 00:07:52.814 { 00:07:52.814 "admin_qpairs": 1, 00:07:52.814 "completed_nvme_io": 70, 00:07:52.814 "current_admin_qpairs": 0, 00:07:52.814 "current_io_qpairs": 0, 00:07:52.814 "io_qpairs": 18, 00:07:52.814 "name": "nvmf_tgt_poll_group_003", 00:07:52.814 "pending_bdev_io": 0, 00:07:52.814 "transports": [ 00:07:52.814 { 00:07:52.814 "trtype": "TCP" 00:07:52.814 } 00:07:52.814 ] 00:07:52.814 } 00:07:52.814 ], 00:07:52.814 "tick_rate": 2200000000 00:07:52.814 }' 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:52.814 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.815 rmmod nvme_tcp 00:07:52.815 rmmod nvme_fabrics 00:07:52.815 rmmod nvme_keyring 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@493 -- # '[' -n 67428 ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@494 -- # killprocess 67428 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67428 ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67428 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67428 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67428' 00:07:52.815 killing process with pid 67428 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67428 00:07:52.815 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67428 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:07:53.073 00:07:53.073 real 0m18.913s 00:07:53.073 user 1m10.037s 00:07:53.073 sys 0m3.114s 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.073 12:52:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 END TEST nvmf_rpc 00:07:53.331 ************************************ 00:07:53.331 12:52:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.331 12:52:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:53.331 12:52:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.331 12:52:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.331 12:52:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.331 ************************************ 00:07:53.331 START TEST nvmf_invalid 00:07:53.331 ************************************ 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:53.331 * Looking for test storage... 
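[editor's note] The (( 7 > 0 )) and (( 70 > 0 )) assertions near the end of the nvmf_rpc test above come from the jsum helper traced at target/rpc.sh@19-20: it runs a jq filter over the captured nvmf_get_stats output and sums the values with awk. A minimal sketch, assuming the stats JSON is fed to jq from the $stats variable captured above (the trace shows only the jq and awk stages, not how the input is passed):

# Sum one numeric field across all poll groups in the nvmf_get_stats JSON.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70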
00:07:53.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.331 
12:52:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.331 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@436 -- # nvmf_veth_init 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:07:53.332 Cannot find device "nvmf_tgt_br" 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.332 Cannot find device "nvmf_tgt_br2" 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # true 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:07:53.332 Cannot find device "nvmf_tgt_br" 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:07:53.332 Cannot find device "nvmf_tgt_br2" 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:07:53.332 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@166 -- # true 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:07:53.590 12:52:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:07:53.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:07:53.590 00:07:53.590 --- 10.0.0.2 ping statistics --- 00:07:53.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.590 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:53.590 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:07:53.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:53.848 00:07:53.848 --- 10.0.0.3 ping statistics --- 00:07:53.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.848 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:53.848 00:07:53.848 --- 10.0.0.1 ping statistics --- 00:07:53.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.848 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@437 -- # return 0 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@485 -- # nvmfpid=67944 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@486 -- # waitforlisten 67944 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67944 ']' 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
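[editor's note] For reference, the ip/iptables commands traced above (nvmf_veth_init, nvmf/common.sh@170-206) build the following topology: one veth pair kept in the root namespace for the initiator (10.0.0.1 on nvmf_init_if), two pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.2 and 10.0.0.3), and a bridge nvmf_br joining the root-side ends. A condensed sketch of the same steps, in the order shown, with the initial "Cannot find device" cleanup attempts omitted:

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end gets an address, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 from inside nvmf_tgt_ns_spdk before the target application is started.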
00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.848 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:53.848 [2024-07-15 12:52:06.166095] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:07:53.848 [2024-07-15 12:52:06.166253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.848 [2024-07-15 12:52:06.313267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.107 [2024-07-15 12:52:06.403327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.107 [2024-07-15 12:52:06.403425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.107 [2024-07-15 12:52:06.403447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.107 [2024-07-15 12:52:06.403462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.107 [2024-07-15 12:52:06.403474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.107 [2024-07-15 12:52:06.403619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.107 [2024-07-15 12:52:06.403725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.107 [2024-07-15 12:52:06.404256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.107 [2024-07-15 12:52:06.404281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:54.107 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20739 00:07:54.685 [2024-07-15 12:52:06.926453] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:54.685 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 12:52:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20739 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:54.685 request: 00:07:54.685 { 00:07:54.685 "method": "nvmf_create_subsystem", 00:07:54.685 "params": { 00:07:54.685 "nqn": "nqn.2016-06.io.spdk:cnode20739", 00:07:54.685 "tgt_name": "foobar" 00:07:54.685 } 00:07:54.685 } 00:07:54.685 Got JSON-RPC error response 00:07:54.685 GoRPCClient: error on JSON-RPC call' 00:07:54.685 12:52:06 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 12:52:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20739 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:54.685 request: 00:07:54.685 { 00:07:54.685 "method": "nvmf_create_subsystem", 00:07:54.685 "params": { 00:07:54.685 "nqn": "nqn.2016-06.io.spdk:cnode20739", 00:07:54.685 "tgt_name": "foobar" 00:07:54.685 } 00:07:54.685 } 00:07:54.685 Got JSON-RPC error response 00:07:54.685 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:54.685 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:54.685 12:52:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17158 00:07:54.944 [2024-07-15 12:52:07.258734] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17158: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:54.944 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 12:52:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17158 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:54.944 request: 00:07:54.944 { 00:07:54.944 "method": "nvmf_create_subsystem", 00:07:54.944 "params": { 00:07:54.944 "nqn": "nqn.2016-06.io.spdk:cnode17158", 00:07:54.944 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:54.944 } 00:07:54.944 } 00:07:54.945 Got JSON-RPC error response 00:07:54.945 GoRPCClient: error on JSON-RPC call' 00:07:54.945 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 12:52:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17158 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:54.945 request: 00:07:54.945 { 00:07:54.945 "method": "nvmf_create_subsystem", 00:07:54.945 "params": { 00:07:54.945 "nqn": "nqn.2016-06.io.spdk:cnode17158", 00:07:54.945 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:54.945 } 00:07:54.945 } 00:07:54.945 Got JSON-RPC error response 00:07:54.945 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:54.945 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:54.945 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16307 00:07:55.204 [2024-07-15 12:52:07.619034] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16307: invalid model number 'SPDK_Controller' 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 12:52:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:55.204 request: 00:07:55.204 { 00:07:55.204 "method": "nvmf_create_subsystem", 00:07:55.204 "params": { 00:07:55.204 "nqn": "nqn.2016-06.io.spdk:cnode16307", 00:07:55.204 "model_number": "SPDK_Controller\u001f" 
00:07:55.204 } 00:07:55.204 } 00:07:55.204 Got JSON-RPC error response 00:07:55.204 GoRPCClient: error on JSON-RPC call' 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 12:52:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:55.204 request: 00:07:55.204 { 00:07:55.204 "method": "nvmf_create_subsystem", 00:07:55.204 "params": { 00:07:55.204 "nqn": "nqn.2016-06.io.spdk:cnode16307", 00:07:55.204 "model_number": "SPDK_Controller\u001f" 00:07:55.204 } 00:07:55.204 } 00:07:55.204 Got JSON-RPC error response 00:07:55.204 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.204 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 84 00:07:55.463 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gyKLrpH,|U@rfF~s&E2QT' 00:07:55.464 12:52:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'gyKLrpH,|U@rfF~s&E2QT' nqn.2016-06.io.spdk:cnode13070 00:07:55.721 [2024-07-15 12:52:08.179550] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13070: invalid serial number 'gyKLrpH,|U@rfF~s&E2QT' 00:07:55.980 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 12:52:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13070 serial_number:gyKLrpH,|U@rfF~s&E2QT], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN gyKLrpH,|U@rfF~s&E2QT 00:07:55.980 request: 00:07:55.980 { 00:07:55.980 "method": "nvmf_create_subsystem", 00:07:55.980 "params": { 00:07:55.980 "nqn": "nqn.2016-06.io.spdk:cnode13070", 00:07:55.980 "serial_number": "gyKLrpH,|U@rfF~s&E2QT" 00:07:55.980 } 00:07:55.980 } 00:07:55.980 Got JSON-RPC error response 00:07:55.980 GoRPCClient: error on JSON-RPC call' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 12:52:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13070 serial_number:gyKLrpH,|U@rfF~s&E2QT], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN gyKLrpH,|U@rfF~s&E2QT 00:07:55.981 request: 00:07:55.981 { 00:07:55.981 "method": "nvmf_create_subsystem", 00:07:55.981 "params": { 00:07:55.981 "nqn": "nqn.2016-06.io.spdk:cnode13070", 00:07:55.981 "serial_number": "gyKLrpH,|U@rfF~s&E2QT" 00:07:55.981 } 00:07:55.981 } 00:07:55.981 Got JSON-RPC error response 00:07:55.981 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 
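[editor's note] Each negative case in this test so far (unknown tgt_name, control-character serial_number and model_number, and the random 21-character serial generated above) is asserted the same way, per the @40-41, @45-46, @50-51 and @54-55 markers: the rpc.py output is captured into out and glob-matched against the Msg text of the expected JSON-RPC error. A hedged restatement of that pattern (capturing stderr with 2>&1 and tolerating the non-zero exit status are assumptions; the trace only shows the captured text and the [[ ... == *...* ]] test):

# Expect nvmf_create_subsystem to reject a serial number with a control character.
out=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
        -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17158 2>&1) || true
[[ $out == *"Invalid SN"* ]]    # matches Code=-32602 Msg=Invalid SN ... in the response

The character-by-character trace continues below with the next random string.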
00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 
00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 
00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:55.981 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu' 00:07:55.982 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu' nqn.2016-06.io.spdk:cnode6975 00:07:56.549 [2024-07-15 12:52:08.760057] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6975: invalid model number 'VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu' 00:07:56.549 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 12:52:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu nqn:nqn.2016-06.io.spdk:cnode6975], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu 00:07:56.549 request: 00:07:56.549 { 00:07:56.549 "method": "nvmf_create_subsystem", 00:07:56.549 "params": { 00:07:56.549 "nqn": "nqn.2016-06.io.spdk:cnode6975", 00:07:56.549 "model_number": "VwGnVSvS\\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu" 00:07:56.549 } 00:07:56.549 } 00:07:56.549 Got JSON-RPC error response 00:07:56.549 GoRPCClient: error on JSON-RPC call' 00:07:56.549 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 12:52:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu nqn:nqn.2016-06.io.spdk:cnode6975], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN VwGnVSvS\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu 00:07:56.549 request: 00:07:56.549 { 00:07:56.549 "method": "nvmf_create_subsystem", 00:07:56.549 "params": { 00:07:56.549 "nqn": "nqn.2016-06.io.spdk:cnode6975", 00:07:56.549 "model_number": "VwGnVSvS\\jHXQ@y+R$]HtZxhiW 7+$?lS~P (6&Xu" 00:07:56.549 } 00:07:56.549 } 00:07:56.549 Got JSON-RPC error response 00:07:56.549 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:56.549 12:52:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:56.807 [2024-07-15 12:52:09.172741] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.807 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:57.373 12:52:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:57.373 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:57.373 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:57.373 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:57.373 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:57.632 [2024-07-15 12:52:09.934625] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:57.632 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 12:52:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:57.632 request: 00:07:57.632 { 00:07:57.632 "method": "nvmf_subsystem_remove_listener", 00:07:57.632 "params": { 00:07:57.632 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:57.632 "listen_address": { 00:07:57.632 "trtype": "tcp", 00:07:57.632 "traddr": "", 00:07:57.632 "trsvcid": "4421" 00:07:57.632 } 00:07:57.632 } 00:07:57.632 } 00:07:57.632 Got JSON-RPC error response 00:07:57.632 GoRPCClient: error on JSON-RPC call' 00:07:57.632 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 12:52:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:57.632 request: 00:07:57.632 { 00:07:57.632 "method": "nvmf_subsystem_remove_listener", 00:07:57.632 "params": { 00:07:57.632 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:57.633 "listen_address": { 00:07:57.633 "trtype": "tcp", 00:07:57.633 "traddr": "", 00:07:57.633 "trsvcid": "4421" 00:07:57.633 } 00:07:57.633 } 00:07:57.633 } 00:07:57.633 Got JSON-RPC error response 00:07:57.633 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:57.633 12:52:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10331 -i 0 00:07:58.200 [2024-07-15 12:52:10.371092] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10331: invalid cntlid range [0-65519] 00:07:58.200 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 12:52:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10331], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:58.200 request: 00:07:58.200 { 00:07:58.200 "method": "nvmf_create_subsystem", 00:07:58.200 "params": { 00:07:58.200 "nqn": "nqn.2016-06.io.spdk:cnode10331", 00:07:58.200 "min_cntlid": 0 00:07:58.200 } 00:07:58.200 } 00:07:58.200 Got JSON-RPC error response 00:07:58.200 GoRPCClient: error on JSON-RPC call' 00:07:58.200 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 12:52:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10331], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 
00:07:58.200 request: 00:07:58.200 { 00:07:58.200 "method": "nvmf_create_subsystem", 00:07:58.200 "params": { 00:07:58.200 "nqn": "nqn.2016-06.io.spdk:cnode10331", 00:07:58.200 "min_cntlid": 0 00:07:58.200 } 00:07:58.200 } 00:07:58.200 Got JSON-RPC error response 00:07:58.200 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:58.200 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode245 -i 65520 00:07:58.459 [2024-07-15 12:52:10.803652] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode245: invalid cntlid range [65520-65519] 00:07:58.459 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 12:52:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode245], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:58.459 request: 00:07:58.459 { 00:07:58.459 "method": "nvmf_create_subsystem", 00:07:58.459 "params": { 00:07:58.459 "nqn": "nqn.2016-06.io.spdk:cnode245", 00:07:58.459 "min_cntlid": 65520 00:07:58.459 } 00:07:58.459 } 00:07:58.459 Got JSON-RPC error response 00:07:58.459 GoRPCClient: error on JSON-RPC call' 00:07:58.459 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 12:52:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode245], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:58.459 request: 00:07:58.459 { 00:07:58.459 "method": "nvmf_create_subsystem", 00:07:58.459 "params": { 00:07:58.459 "nqn": "nqn.2016-06.io.spdk:cnode245", 00:07:58.459 "min_cntlid": 65520 00:07:58.459 } 00:07:58.459 } 00:07:58.459 Got JSON-RPC error response 00:07:58.459 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:58.459 12:52:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16394 -I 0 00:07:59.025 [2024-07-15 12:52:11.245029] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16394: invalid cntlid range [1-0] 00:07:59.025 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16394], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:59.025 request: 00:07:59.025 { 00:07:59.025 "method": "nvmf_create_subsystem", 00:07:59.025 "params": { 00:07:59.025 "nqn": "nqn.2016-06.io.spdk:cnode16394", 00:07:59.025 "max_cntlid": 0 00:07:59.025 } 00:07:59.025 } 00:07:59.025 Got JSON-RPC error response 00:07:59.025 GoRPCClient: error on JSON-RPC call' 00:07:59.025 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16394], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:59.025 request: 00:07:59.025 { 00:07:59.025 "method": "nvmf_create_subsystem", 00:07:59.025 "params": { 00:07:59.025 "nqn": "nqn.2016-06.io.spdk:cnode16394", 00:07:59.025 "max_cntlid": 0 00:07:59.025 } 00:07:59.025 } 
00:07:59.025 Got JSON-RPC error response 00:07:59.025 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:59.025 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8035 -I 65520 00:07:59.283 [2024-07-15 12:52:11.657537] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8035: invalid cntlid range [1-65520] 00:07:59.283 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8035], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:59.283 request: 00:07:59.283 { 00:07:59.283 "method": "nvmf_create_subsystem", 00:07:59.283 "params": { 00:07:59.283 "nqn": "nqn.2016-06.io.spdk:cnode8035", 00:07:59.283 "max_cntlid": 65520 00:07:59.283 } 00:07:59.283 } 00:07:59.283 Got JSON-RPC error response 00:07:59.283 GoRPCClient: error on JSON-RPC call' 00:07:59.283 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8035], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:59.283 request: 00:07:59.283 { 00:07:59.283 "method": "nvmf_create_subsystem", 00:07:59.283 "params": { 00:07:59.283 "nqn": "nqn.2016-06.io.spdk:cnode8035", 00:07:59.283 "max_cntlid": 65520 00:07:59.283 } 00:07:59.283 } 00:07:59.283 Got JSON-RPC error response 00:07:59.283 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:59.283 12:52:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20742 -i 6 -I 5 00:07:59.541 [2024-07-15 12:52:11.969948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20742: invalid cntlid range [6-5] 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20742], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:59.799 request: 00:07:59.799 { 00:07:59.799 "method": "nvmf_create_subsystem", 00:07:59.799 "params": { 00:07:59.799 "nqn": "nqn.2016-06.io.spdk:cnode20742", 00:07:59.799 "min_cntlid": 6, 00:07:59.799 "max_cntlid": 5 00:07:59.799 } 00:07:59.799 } 00:07:59.799 Got JSON-RPC error response 00:07:59.799 GoRPCClient: error on JSON-RPC call' 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 12:52:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20742], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:59.799 request: 00:07:59.799 { 00:07:59.799 "method": "nvmf_create_subsystem", 00:07:59.799 "params": { 00:07:59.799 "nqn": "nqn.2016-06.io.spdk:cnode20742", 00:07:59.799 "min_cntlid": 6, 00:07:59.799 "max_cntlid": 5 00:07:59.799 } 00:07:59.799 } 00:07:59.799 Got JSON-RPC error response 00:07:59.799 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
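The three cntlid checks above share one pattern: call nvmf_create_subsystem with an out-of-range min/max controller ID and require that the call fails with an error naming the rejected range. A minimal standalone sketch of that pattern follows, assuming a running SPDK target reachable on the default RPC socket; the rpc.py path and the -i/-I flags are the ones used in this run, while the helper name expect_invalid_cntlid is illustrative only and not part of the captured test.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same script the test drives

    # Hypothetical helper: the create call must fail, and its combined output
    # must mention the cntlid range that was rejected.
    expect_invalid_cntlid() {
        local nqn=$1 min=$2 max=$3 out
        if out=$("$rpc" nvmf_create_subsystem "$nqn" -i "$min" -I "$max" 2>&1); then
            echo "unexpected success for $nqn" >&2
            return 1
        fi
        [[ $out == *"Invalid cntlid range"* ]]
    }

    expect_invalid_cntlid nqn.2016-06.io.spdk:cnode10331     0 65519   # min below 1
    expect_invalid_cntlid nqn.2016-06.io.spdk:cnode245   65520 65519   # min above the 65519 ceiling
    expect_invalid_cntlid nqn.2016-06.io.spdk:cnode20742     6     5   # min greater than max

The captured run performs the same check with a bash pattern match of the form == *Invalid cntlid range* against the GoRPCClient output, which is why each step in the trace above ends with that pattern in its backslash-escaped xtrace form.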
00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:59.799 { 00:07:59.799 "name": "foobar", 00:07:59.799 "method": "nvmf_delete_target", 00:07:59.799 "req_id": 1 00:07:59.799 } 00:07:59.799 Got JSON-RPC error response 00:07:59.799 response: 00:07:59.799 { 00:07:59.799 "code": -32602, 00:07:59.799 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:59.799 }' 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:59.799 { 00:07:59.799 "name": "foobar", 00:07:59.799 "method": "nvmf_delete_target", 00:07:59.799 "req_id": 1 00:07:59.799 } 00:07:59.799 Got JSON-RPC error response 00:07:59.799 response: 00:07:59.799 { 00:07:59.799 "code": -32602, 00:07:59.799 "message": "The specified target doesn't exist, cannot delete it." 00:07:59.799 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.799 rmmod nvme_tcp 00:07:59.799 rmmod nvme_fabrics 00:07:59.799 rmmod nvme_keyring 00:07:59.799 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@493 -- # '[' -n 67944 ']' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@494 -- # killprocess 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67944 ']' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67944' 00:08:00.058 killing process with pid 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67944 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:08:00.058 ************************************ 00:08:00.058 END TEST nvmf_invalid 00:08:00.058 ************************************ 00:08:00.058 00:08:00.058 real 0m6.930s 00:08:00.058 user 0m28.732s 00:08:00.058 sys 0m1.380s 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.058 12:52:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:00.317 12:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:00.317 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:00.317 12:52:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.317 12:52:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.317 12:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.317 ************************************ 00:08:00.317 START TEST nvmf_abort 00:08:00.317 ************************************ 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:00.317 * Looking for test storage... 
00:08:00.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.317 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@436 -- # nvmf_veth_init 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:08:00.317 Cannot find device "nvmf_tgt_br" 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.317 Cannot find device "nvmf_tgt_br2" 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # true 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:08:00.317 Cannot find device "nvmf_tgt_br" 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:08:00.317 Cannot find device "nvmf_tgt_br2" 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:00.317 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # true 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@167 -- # true 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name 
nvmf_init_br 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:08:00.576 12:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.576 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.576 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.576 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.576 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.576 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:08:00.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:08:00.576 00:08:00.576 --- 10.0.0.2 ping statistics --- 00:08:00.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.576 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:08:00.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:08:00.834 00:08:00.834 --- 10.0.0.3 ping statistics --- 00:08:00.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.834 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:00.834 00:08:00.834 --- 10.0.0.1 ping statistics --- 00:08:00.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.834 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@437 -- # return 0 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@485 -- # nvmfpid=68451 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@486 -- # waitforlisten 68451 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68451 ']' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.834 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:00.834 [2024-07-15 12:52:13.136804] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:08:00.834 [2024-07-15 12:52:13.136949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.834 [2024-07-15 12:52:13.273334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.091 [2024-07-15 12:52:13.360689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.091 [2024-07-15 12:52:13.361058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:01.091 [2024-07-15 12:52:13.361276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.091 [2024-07-15 12:52:13.361445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.091 [2024-07-15 12:52:13.361497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.091 [2024-07-15 12:52:13.361778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.091 [2024-07-15 12:52:13.362287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.091 [2024-07-15 12:52:13.362304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.091 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.091 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:01.091 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:01.091 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.091 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 [2024-07-15 12:52:13.594424] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 Malloc0 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 Delay0 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 [2024-07-15 12:52:13.689788] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.348 12:52:13 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:01.605 [2024-07-15 12:52:13.866544] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:03.504 Initializing NVMe Controllers 00:08:03.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:03.504 controller IO queue size 128 less than required 00:08:03.504 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:03.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:03.504 Initialization complete. Launching workers. 
00:08:03.504 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22911 00:08:03.504 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22972, failed to submit 62 00:08:03.504 success 22915, unsuccess 57, failed 0 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.504 12:52:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.763 rmmod nvme_tcp 00:08:03.763 rmmod nvme_fabrics 00:08:03.763 rmmod nvme_keyring 00:08:03.763 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@493 -- # '[' -n 68451 ']' 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@494 -- # killprocess 68451 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68451 ']' 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68451 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68451 00:08:03.764 killing process with pid 68451 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68451' 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68451 00:08:03.764 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68451 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:08:04.024 00:08:04.024 real 0m3.700s 00:08:04.024 user 0m10.249s 00:08:04.024 sys 0m1.125s 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.024 12:52:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:04.024 ************************************ 00:08:04.024 END TEST nvmf_abort 00:08:04.024 ************************************ 00:08:04.024 12:52:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:04.024 12:52:16 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:04.024 12:52:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.024 12:52:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.024 12:52:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.024 ************************************ 00:08:04.024 START TEST nvmf_ns_hotplug_stress 00:08:04.024 ************************************ 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:04.024 * Looking for test storage... 00:08:04.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:04.024 12:52:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.024 12:52:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # nvmf_veth_init 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:08:04.024 Cannot find device "nvmf_tgt_br" 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:08:04.024 Cannot find device "nvmf_tgt_br2" 00:08:04.024 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # true 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:08:04.025 Cannot find device "nvmf_tgt_br" 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:08:04.025 Cannot find device "nvmf_tgt_br2" 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:04.025 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:04.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:04.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:04.283 12:52:16 
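
The nvmf_veth_init sequence traced here hand-builds the test topology: a dedicated network namespace for the target, veth pairs joining it to the initiator side, static 10.0.0.0/24 addresses, and a bridge plus iptables rules (applied just below) that let the initiator reach port 4420. A consolidated sketch of those steps, using the same namespace and interface names as this run; the bring-up, bridge, and connectivity checks mirror the trace that continues below:

  # create the target namespace and the veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side interfaces into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic and verify connectivity in both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
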
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:08:04.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:04.283 00:08:04.283 --- 10.0.0.2 ping statistics --- 00:08:04.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.283 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:08:04.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:04.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:04.283 00:08:04.283 --- 10.0.0.3 ping statistics --- 00:08:04.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.283 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:04.283 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:04.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:08:04.542 00:08:04.542 --- 10.0.0.1 ping statistics --- 00:08:04.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.542 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@437 -- # return 0 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # nvmfpid=68674 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # waitforlisten 68674 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68674 ']' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.542 12:52:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.542 [2024-07-15 12:52:16.853854] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:08:04.542 [2024-07-15 12:52:16.854245] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.542 [2024-07-15 12:52:17.002221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.800 [2024-07-15 12:52:17.089803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:04.800 [2024-07-15 12:52:17.089863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.800 [2024-07-15 12:52:17.089880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.800 [2024-07-15 12:52:17.089893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.800 [2024-07-15 12:52:17.089904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.800 [2024-07-15 12:52:17.090000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.800 [2024-07-15 12:52:17.090088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.800 [2024-07-15 12:52:17.090103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:05.731 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.989 [2024-07-15 12:52:18.334682] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.989 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.246 12:52:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.810 [2024-07-15 12:52:19.015380] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.810 12:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.068 12:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:07.324 Malloc0 00:08:07.324 12:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:07.890 Delay0 00:08:07.890 12:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.148 12:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:08.429 NULL1 00:08:08.429 
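
Before the stress phase starts, ns_hotplug_stress.sh provisions the target over JSON-RPC: the nvmf_tgt app runs inside the target namespace (nvmfappstart above), a TCP transport and subsystem are created, and two namespaces are attached, one backed by the delay bdev Delay0 (Malloc0 wrapped with artificial latency) and one by the resizable null bdev NULL1. A condensed sketch of the RPC sequence traced above and just below, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the waitforlisten handling on /var/tmp/spdk.sock omitted for brevity:

  # start the target inside the namespace (backgrounded here for illustration)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # transport, subsystem (-m 10 sets the maximum namespace count), and listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # backing bdevs and the two namespaces used by the stress run
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
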
12:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:09.017 12:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68822 00:08:09.017 12:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:09.017 12:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:09.017 12:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.950 Read completed with error (sct=0, sc=11) 00:08:10.209 12:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.467 12:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:10.467 12:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:11.032 true 00:08:11.032 12:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:11.032 12:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.596 12:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.853 12:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:11.853 12:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:12.418 true 00:08:12.418 12:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:12.418 12:52:24 
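
The stress phase pairs a timed I/O load with continuous namespace churn: spdk_nvme_perf (PERF_PID 68822 here) issues random reads against the subsystem for 30 seconds while the script keeps detaching namespace 1, re-attaching Delay0, and growing NULL1 one unit at a time, looping for as long as the perf process is alive. A sketch of that loop as implied by the repeating kill -0 / remove_ns / add_ns / bdev_null_resize pattern in the trace (the real script may differ in detail):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do                                      # loop until perf exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # detach NSID 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-attach the delay bdev
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                     # 1001, 1002, ... as in the trace
  done
  wait "$PERF_PID"
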
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.984 12:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.549 12:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:13.549 12:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:13.807 true 00:08:13.807 12:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:13.807 12:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.239 12:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.497 12:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:15.497 12:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:15.756 true 00:08:15.756 12:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:15.756 12:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.581 12:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.840 12:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:16.840 12:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:17.098 true 00:08:17.098 12:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:17.098 12:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 12:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.031 12:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:18.031 12:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:18.597 true 00:08:18.597 12:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:18.597 12:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.855 12:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.422 12:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:19.422 12:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:19.679 true 00:08:19.679 12:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:19.679 12:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.936 12:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.194 12:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:20.194 12:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:20.452 true 00:08:20.452 12:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:20.452 12:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.710 12:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.968 12:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:20.968 12:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:21.225 true 00:08:21.225 12:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:21.225 12:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.161 12:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.726 12:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:22.726 12:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:22.984 true 00:08:22.984 12:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:22.984 12:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 12:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.611 12:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:24.611 12:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:24.880 true 00:08:24.880 12:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:24.880 12:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.443 12:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.995 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.995 12:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:25.995 12:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:26.252 true 00:08:26.252 12:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:26.252 12:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.817 12:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.074 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.336 12:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:27.336 12:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:27.604 true 00:08:27.604 12:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:27.604 12:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 12:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.503 12:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:29.503 12:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:30.068 true 00:08:30.068 12:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:30.068 12:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.633 12:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.892 12:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:30.892 12:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:31.457 true 00:08:31.457 12:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:31.457 12:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.022 12:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.340 12:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:32.340 12:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:32.920 true 00:08:32.920 12:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:32.920 12:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.179 12:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.438 12:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:33.438 12:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:34.003 true 00:08:34.003 12:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:34.003 12:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.378 12:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.636 12:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:35.636 12:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:35.895 true 00:08:35.895 12:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:35.895 12:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 12:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.089 12:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:37.089 12:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:37.347 true 00:08:37.347 12:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:37.347 12:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.173 12:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.431 12:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:38.431 12:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:38.997 true 00:08:38.997 12:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:38.997 12:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.372 Initializing NVMe 
Controllers 
00:08:40.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:08:40.372 Controller IO queue size 128, less than required. 
00:08:40.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:40.372 Controller IO queue size 128, less than required. 
00:08:40.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:40.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:08:40.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 
00:08:40.372 Initialization complete. Launching workers. 
00:08:40.372 ======================================================== 
00:08:40.372 Latency(us) 
00:08:40.372 Device Information :                     IOPS      MiB/s    Average        min        max 
00:08:40.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    3335.60       1.63   28793.03    3047.90 1096897.61 
00:08:40.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10489.59       5.12   12202.18    3529.16  744989.94 
00:08:40.372 ======================================================== 
00:08:40.372 Total                                                                  :   13825.19       6.75   16205.04    3047.90 1096897.61 
00:08:40.372 
00:08:40.372 12:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.629 12:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:40.629 12:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:40.888 true 00:08:40.888 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68822 00:08:40.888 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68822) - No such process 00:08:40.888 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68822 00:08:40.888 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.145 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.715 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:41.715 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:41.715 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:41.715 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.715 12:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:41.972 null0 00:08:41.972 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:41.972 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:41.972 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:42.229 null1 00:08:42.229 12:52:54
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.229 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.229 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:42.486 null2 00:08:42.486 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:42.486 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:42.486 12:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:43.050 null3 00:08:43.050 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.050 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.050 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:43.307 null4 00:08:43.563 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.563 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.563 12:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:43.820 null5 00:08:43.820 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.820 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.820 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:44.079 null6 00:08:44.079 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.079 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.079 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:44.644 null7 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
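
Once the timed run finishes, the test switches to a parallel hotplug phase: eight null bdevs (null0 through null7, 100 MB each with a 4096-byte block size) are created, and eight add_remove workers run concurrently, each cycling its own namespace ID (1 through 8) through ten add/remove iterations against cnode1. A sketch of the pattern visible in this stretch of the xtrace (function name, loop bounds, and RPC arguments taken from the trace; argument handling simplified):

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      rpc.py bdev_null_create "null$i" 100 4096   # null0 .. null7
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &            # add_remove 1 null0 ... add_remove 8 null7
      pids+=($!)
  done
  wait "${pids[@]}"                               # the "wait 69707 69708 ..." seen below

The interleaved nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns calls that follow in the log are these eight workers racing against each other on the same subsystem.
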
00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:44.644 12:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69707 69708 69710 69712 69713 69716 69717 69718 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.902 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.161 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:45.161 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.419 12:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.678 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:45.954 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.954 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.954 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.954 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:45.954 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.220 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.478 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.736 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.736 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.736 12:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.736 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.995 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.995 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.995 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.995 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.253 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.511 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.769 12:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.769 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.769 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.769 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.769 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.769 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.027 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
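The interleaved entries above and below come from parallel runs of the script's add_remove helper. A minimal bash sketch of that pattern, reconstructed only from the xtrace tags visible in this trace (ns_hotplug_stress.sh @14-@18 and @62-@66); nthreads=8 is inferred from the eight PIDs collected by the `wait` entry above, and the real script may differ in detail:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8   # matches the eight background PIDs waited on above

    # Repeatedly attach and detach one null bdev as a namespace of cnode1.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # One background worker per namespace (null0..null7 -> nsid 1..8); the
    # interleaving in the log is these jobs racing each other, and the parent
    # collects their PIDs with `wait`.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
    done
    wait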
00:08:48.027 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.027 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.027 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.285 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.542 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.542 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.543 12:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.801 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.059 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.316 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.575 12:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.575 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.575 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.575 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.834 12:53:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.834 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.092 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.350 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:50.609 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.609 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.609 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.609 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.609 12:53:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.609 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.609 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.609 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.868 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.128 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.385 12:53:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.385 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.643 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.643 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.643 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.643 12:53:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.643 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.901 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.901 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.901 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.901 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.901 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.159 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.418 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:52.675 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:52.675 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.675 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.675 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.675 12:53:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.675 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.675 12:53:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.675 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.676 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.936 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.194 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.452 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.711 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.711 12:53:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.711 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.969 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.227 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:54.227 12:53:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.228 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.228 rmmod nvme_tcp 00:08:54.228 rmmod nvme_fabrics 00:08:54.228 rmmod nvme_keyring 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # '[' -n 68674 ']' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # killprocess 68674 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68674 ']' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68674 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68674 00:08:54.487 killing process with pid 68674 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68674' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68674 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68674 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:08:54.487 00:08:54.487 real 0m50.627s 00:08:54.487 user 4m13.789s 00:08:54.487 sys 0m16.235s 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.487 ************************************ 00:08:54.487 12:53:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.487 END TEST nvmf_ns_hotplug_stress 00:08:54.487 ************************************ 00:08:54.746 12:53:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.746 12:53:06 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:54.746 12:53:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.746 12:53:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.746 12:53:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.746 ************************************ 00:08:54.746 START TEST nvmf_connect_stress 00:08:54.746 ************************************ 00:08:54.746 12:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:54.746 * Looking for test storage... 00:08:54.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@12 -- # nvmftestinit 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@436 -- # nvmf_veth_init 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:08:54.747 Cannot find device "nvmf_tgt_br" 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.747 Cannot find device "nvmf_tgt_br2" 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # true 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:08:54.747 12:53:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:08:54.747 Cannot find device "nvmf_tgt_br" 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:08:54.747 Cannot find device "nvmf_tgt_br2" 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:08:54.747 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:08:55.027 12:53:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:08:55.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:08:55.027 00:08:55.027 --- 10.0.0.2 ping statistics --- 00:08:55.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.027 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:08:55.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:55.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:08:55.027 00:08:55.027 --- 10.0.0.3 ping statistics --- 00:08:55.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.027 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:55.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:55.027 00:08:55.027 --- 10.0.0.1 ping statistics --- 00:08:55.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.027 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@437 -- # return 0 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:55.027 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@485 -- # nvmfpid=71074 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@486 -- # waitforlisten 71074 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71074 ']' 00:08:55.289 12:53:07 
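Editor's note: for readability, the veth/namespace/bridge topology that nvmf_veth_init assembles in the trace above condenses to the sketch below. Every command is taken from the logged lines; only the grouping comments are added, and the harmless "Cannot find device" probes from the preceding cleanup pass are omitted.

    # Namespace that will host the SPDK target; its veth ends carry 10.0.0.2/.3.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-side, two target-side (moved into the namespace).
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and tie the bridge-side peers together under nvmf_br.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open the NVMe/TCP port on the initiator interface, allow bridge forwarding,
    # then verify reachability in both directions (the ping output shown above).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1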
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.289 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:55.289 [2024-07-15 12:53:07.583215] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:08:55.289 [2024-07-15 12:53:07.583338] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.289 [2024-07-15 12:53:07.721596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:55.554 [2024-07-15 12:53:07.810463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.554 [2024-07-15 12:53:07.810565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.554 [2024-07-15 12:53:07.810588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.554 [2024-07-15 12:53:07.810604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.554 [2024-07-15 12:53:07.810617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
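Editor's note: the target itself is started inside that namespace by nvmfappstart -m 0xE (cores 1-3), as the ip netns exec line above shows. A minimal sketch of the launch follows; the polling loop is only a stand-in for the waitforlisten helper from autotest_common.sh, whose exact implementation is not part of this trace.

    # Start nvmf_tgt inside the namespace so its listeners can bind on 10.0.0.2/10.0.0.3.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Approximation of waitforlisten: poll until the app answers on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done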
00:08:55.554 [2024-07-15 12:53:07.811196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.554 [2024-07-15 12:53:07.811327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.554 [2024-07-15 12:53:07.811330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.554 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.554 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:55.554 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.555 [2024-07-15 12:53:07.948691] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.555 [2024-07-15 12:53:07.974871] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.555 NULL1 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71111 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.555 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.813 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.072 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.072 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:56.072 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.072 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.072 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.330 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:56.330 12:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.330 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.330 12:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.588 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.588 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:56.588 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.588 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.588 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.156 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.156 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:57.156 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.156 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.156 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.415 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.415 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:57.415 12:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.415 12:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.415 12:53:09 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.673 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.673 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:57.673 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.673 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.673 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.930 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.930 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:57.930 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.930 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.930 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.495 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.495 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:58.495 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:58.495 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.495 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.752 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.752 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:58.752 12:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:58.752 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.752 12:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.010 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:59.010 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:59.010 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.010 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.269 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.269 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:59.269 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:59.269 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.269 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.525 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.525 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:08:59.525 12:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:59.526 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.526 12:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
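Editor's note: the repeating kill -0 71111 / rpc_cmd pairs from here onward are connect_stress.sh keeping the target busy with a prebuilt RPC batch for as long as the stress binary (PERF_PID 71111, started above with -t 10) stays alive. The sketch below is inferred from this trace: the loop shape is my reading of the bare rpc_cmd invocations at line 35, and the bodies of the cat-appended requests are not visible in this excerpt, so rpc_get_methods is only a labelled stand-in. rpc_cmd is the autotest helper seen throughout the log; per the rpc_addr setting above it forwards to scripts/rpc.py against /var/tmp/spdk.sock.

    # Stress tool: single core (-c 0x1), 10-second run (-t 10), targeting cnode1 over TCP.
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!

    # Build rpc.txt out of 20 cat-appended requests, as the seq/cat lines above show.
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # The real request body is elided in this trace; rpc_get_methods is a
        # harmless hypothetical placeholder for it.
        cat >> "$rpcs" <<'RPC'
rpc_get_methods
RPC
    done

    # Each kill -0 / rpc_cmd pair in the trace corresponds to one pass of this loop.
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"
    done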
00:09:00.112 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.112 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:00.112 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.112 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.112 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.369 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.369 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:00.369 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.369 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.369 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.627 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.627 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:00.627 12:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.627 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.627 12:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.884 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.884 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:00.884 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.884 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.884 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.140 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.140 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:01.140 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.140 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.140 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.712 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.712 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:01.712 12:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.712 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.712 12:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.970 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.970 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:01.970 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.970 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.970 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 12:53:14 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.227 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:02.227 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.227 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.227 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.485 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.485 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:02.485 12:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.485 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.485 12:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.743 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.743 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:02.743 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.743 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.743 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.308 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.308 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:03.308 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.308 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.308 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.566 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.566 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:03.566 12:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.566 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.566 12:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.824 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.824 12:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:03.824 12:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.824 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.824 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.081 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.081 12:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:04.081 12:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.081 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.081 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.647 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.647 12:53:16 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:04.647 12:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.647 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.647 12:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.905 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.905 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:04.905 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.905 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.905 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.162 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.162 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:05.162 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.162 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.162 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.419 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.419 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:05.419 12:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.419 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.419 12:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.676 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.676 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:05.676 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.676 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.676 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.934 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71111 00:09:06.192 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71111) - No such process 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71111 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.192 12:53:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.192 rmmod nvme_tcp 00:09:06.192 rmmod nvme_fabrics 00:09:06.192 rmmod nvme_keyring 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@493 -- # '[' -n 71074 ']' 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@494 -- # killprocess 71074 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71074 ']' 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71074 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71074 00:09:06.192 killing process with pid 71074 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71074' 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71074 00:09:06.192 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71074 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:06.451 00:09:06.451 real 0m11.752s 00:09:06.451 user 0m38.429s 00:09:06.451 sys 0m3.580s 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.451 ************************************ 00:09:06.451 END TEST nvmf_connect_stress 00:09:06.451 ************************************ 00:09:06.451 12:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.451 12:53:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:06.451 12:53:18 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:06.451 12:53:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.451 12:53:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.451 12:53:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.451 ************************************ 00:09:06.451 START TEST nvmf_fused_ordering 00:09:06.451 ************************************ 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:06.451 * Looking for test storage... 00:09:06.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.451 12:53:18 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@12 -- # nvmftestinit 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.452 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:06.710 Cannot find device "nvmf_tgt_br" 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.710 Cannot find device "nvmf_tgt_br2" 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # true 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:06.710 12:53:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:06.710 Cannot find device "nvmf_tgt_br" 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:06.710 Cannot find device "nvmf_tgt_br2" 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:06.710 12:53:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:06.710 12:53:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.710 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.968 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.968 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.968 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.968 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:06.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:06.968 00:09:06.968 --- 10.0.0.2 ping statistics --- 00:09:06.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.968 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:06.968 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:06.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:06.968 00:09:06.968 --- 10.0.0.3 ping statistics --- 00:09:06.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.968 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:06.969 00:09:06.969 --- 10.0.0.1 ping statistics --- 00:09:06.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.969 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@437 -- # return 0 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:06.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@485 -- # nvmfpid=71434 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@486 -- # waitforlisten 71434 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71434 ']' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.969 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:06.969 [2024-07-15 12:53:19.297703] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:06.969 [2024-07-15 12:53:19.297810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.969 [2024-07-15 12:53:19.431721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.228 [2024-07-15 12:53:19.498991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.228 [2024-07-15 12:53:19.499056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.228 [2024-07-15 12:53:19.499070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.228 [2024-07-15 12:53:19.499081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.228 [2024-07-15 12:53:19.499089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
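Editor's note: condensed from the rpc_cmd calls in the lines that follow, the provisioning for the fused_ordering test boils down to the sequence below (rpc_cmd again forwards to scripts/rpc.py on the /var/tmp/spdk.sock socket the target just opened). The last command is the initiator-side tool whose fused_ordering(N) output closes this excerpt; flag meanings in the comments are my reading of the standard rpc.py options, not something stated in the log.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                   # TCP transport (options as logged)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                # allow any host, serial number, ns limit
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                    # listener inside the namespace
    rpc_cmd bdev_null_create NULL1 1000 512                           # null bdev (the test later reports it as 1GB)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # expose NULL1 as namespace 1

    # Initiator side: the tool that produces the fused_ordering(N) lines below.
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'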
00:09:07.228 [2024-07-15 12:53:19.499119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 [2024-07-15 12:53:19.619841] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 [2024-07-15 12:53:19.635956] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 NULL1 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.228 12:53:19 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.228 12:53:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:07.228 [2024-07-15 12:53:19.687742] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:07.228 [2024-07-15 12:53:19.687832] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71471 ] 00:09:07.796 Attached to nqn.2016-06.io.spdk:cnode1 00:09:07.796 Namespace ID: 1 size: 1GB 00:09:07.796 fused_ordering(0) 00:09:07.796 fused_ordering(1) 00:09:07.796 fused_ordering(2) 00:09:07.796 fused_ordering(3) 00:09:07.796 fused_ordering(4) 00:09:07.796 fused_ordering(5) 00:09:07.796 fused_ordering(6) 00:09:07.796 fused_ordering(7) 00:09:07.796 fused_ordering(8) 00:09:07.796 fused_ordering(9) 00:09:07.796 fused_ordering(10) 00:09:07.796 fused_ordering(11) 00:09:07.796 fused_ordering(12) 00:09:07.796 fused_ordering(13) 00:09:07.796 fused_ordering(14) 00:09:07.796 fused_ordering(15) 00:09:07.796 fused_ordering(16) 00:09:07.796 fused_ordering(17) 00:09:07.796 fused_ordering(18) 00:09:07.796 fused_ordering(19) 00:09:07.796 fused_ordering(20) 00:09:07.796 fused_ordering(21) 00:09:07.796 fused_ordering(22) 00:09:07.796 fused_ordering(23) 00:09:07.796 fused_ordering(24) 00:09:07.796 fused_ordering(25) 00:09:07.796 fused_ordering(26) 00:09:07.796 fused_ordering(27) 00:09:07.796 fused_ordering(28) 00:09:07.796 fused_ordering(29) 00:09:07.796 fused_ordering(30) 00:09:07.796 fused_ordering(31) 00:09:07.796 fused_ordering(32) 00:09:07.796 fused_ordering(33) 00:09:07.796 fused_ordering(34) 00:09:07.796 fused_ordering(35) 00:09:07.796 fused_ordering(36) 00:09:07.796 fused_ordering(37) 00:09:07.796 fused_ordering(38) 00:09:07.796 fused_ordering(39) 00:09:07.796 fused_ordering(40) 00:09:07.796 fused_ordering(41) 00:09:07.796 fused_ordering(42) 00:09:07.796 fused_ordering(43) 00:09:07.796 fused_ordering(44) 00:09:07.796 fused_ordering(45) 00:09:07.796 fused_ordering(46) 00:09:07.796 fused_ordering(47) 00:09:07.796 fused_ordering(48) 00:09:07.796 fused_ordering(49) 00:09:07.796 fused_ordering(50) 00:09:07.796 fused_ordering(51) 00:09:07.796 fused_ordering(52) 00:09:07.796 fused_ordering(53) 00:09:07.796 fused_ordering(54) 00:09:07.796 fused_ordering(55) 00:09:07.796 fused_ordering(56) 00:09:07.796 fused_ordering(57) 00:09:07.796 fused_ordering(58) 00:09:07.796 fused_ordering(59) 00:09:07.796 fused_ordering(60) 00:09:07.796 fused_ordering(61) 00:09:07.796 fused_ordering(62) 00:09:07.796 fused_ordering(63) 00:09:07.796 fused_ordering(64) 00:09:07.796 fused_ordering(65) 00:09:07.796 fused_ordering(66) 00:09:07.796 fused_ordering(67) 00:09:07.796 fused_ordering(68) 00:09:07.796 fused_ordering(69) 00:09:07.796 fused_ordering(70) 00:09:07.796 fused_ordering(71) 00:09:07.796 fused_ordering(72) 00:09:07.796 fused_ordering(73) 00:09:07.796 fused_ordering(74) 00:09:07.796 fused_ordering(75) 00:09:07.796 fused_ordering(76) 00:09:07.796 fused_ordering(77) 00:09:07.796 fused_ordering(78) 00:09:07.796 fused_ordering(79) 00:09:07.796 fused_ordering(80) 00:09:07.796 
fused_ordering(81) 00:09:07.796 fused_ordering(82) 00:09:07.796 fused_ordering(83) 00:09:07.796 fused_ordering(84) 00:09:07.796 fused_ordering(85) 00:09:07.796 fused_ordering(86) 00:09:07.796 fused_ordering(87) 00:09:07.796 fused_ordering(88) 00:09:07.796 fused_ordering(89) 00:09:07.796 fused_ordering(90) 00:09:07.796 fused_ordering(91) 00:09:07.796 fused_ordering(92) 00:09:07.796 fused_ordering(93) 00:09:07.796 fused_ordering(94) 00:09:07.796 fused_ordering(95) 00:09:07.796 fused_ordering(96) 00:09:07.796 fused_ordering(97) 00:09:07.796 fused_ordering(98) 00:09:07.796 fused_ordering(99) 00:09:07.796 fused_ordering(100) 00:09:07.796 fused_ordering(101) 00:09:07.796 fused_ordering(102) 00:09:07.796 fused_ordering(103) 00:09:07.796 fused_ordering(104) 00:09:07.796 fused_ordering(105) 00:09:07.796 fused_ordering(106) 00:09:07.796 fused_ordering(107) 00:09:07.796 fused_ordering(108) 00:09:07.796 fused_ordering(109) 00:09:07.796 fused_ordering(110) 00:09:07.796 fused_ordering(111) 00:09:07.796 fused_ordering(112) 00:09:07.796 fused_ordering(113) 00:09:07.796 fused_ordering(114) 00:09:07.796 fused_ordering(115) 00:09:07.796 fused_ordering(116) 00:09:07.796 fused_ordering(117) 00:09:07.796 fused_ordering(118) 00:09:07.796 fused_ordering(119) 00:09:07.796 fused_ordering(120) 00:09:07.796 fused_ordering(121) 00:09:07.796 fused_ordering(122) 00:09:07.796 fused_ordering(123) 00:09:07.796 fused_ordering(124) 00:09:07.796 fused_ordering(125) 00:09:07.796 fused_ordering(126) 00:09:07.796 fused_ordering(127) 00:09:07.796 fused_ordering(128) 00:09:07.796 fused_ordering(129) 00:09:07.796 fused_ordering(130) 00:09:07.796 fused_ordering(131) 00:09:07.796 fused_ordering(132) 00:09:07.796 fused_ordering(133) 00:09:07.796 fused_ordering(134) 00:09:07.796 fused_ordering(135) 00:09:07.796 fused_ordering(136) 00:09:07.796 fused_ordering(137) 00:09:07.796 fused_ordering(138) 00:09:07.796 fused_ordering(139) 00:09:07.796 fused_ordering(140) 00:09:07.796 fused_ordering(141) 00:09:07.796 fused_ordering(142) 00:09:07.796 fused_ordering(143) 00:09:07.796 fused_ordering(144) 00:09:07.796 fused_ordering(145) 00:09:07.796 fused_ordering(146) 00:09:07.796 fused_ordering(147) 00:09:07.796 fused_ordering(148) 00:09:07.796 fused_ordering(149) 00:09:07.796 fused_ordering(150) 00:09:07.796 fused_ordering(151) 00:09:07.796 fused_ordering(152) 00:09:07.796 fused_ordering(153) 00:09:07.796 fused_ordering(154) 00:09:07.796 fused_ordering(155) 00:09:07.796 fused_ordering(156) 00:09:07.796 fused_ordering(157) 00:09:07.796 fused_ordering(158) 00:09:07.796 fused_ordering(159) 00:09:07.796 fused_ordering(160) 00:09:07.796 fused_ordering(161) 00:09:07.796 fused_ordering(162) 00:09:07.796 fused_ordering(163) 00:09:07.796 fused_ordering(164) 00:09:07.796 fused_ordering(165) 00:09:07.796 fused_ordering(166) 00:09:07.796 fused_ordering(167) 00:09:07.796 fused_ordering(168) 00:09:07.796 fused_ordering(169) 00:09:07.796 fused_ordering(170) 00:09:07.796 fused_ordering(171) 00:09:07.796 fused_ordering(172) 00:09:07.796 fused_ordering(173) 00:09:07.796 fused_ordering(174) 00:09:07.796 fused_ordering(175) 00:09:07.796 fused_ordering(176) 00:09:07.796 fused_ordering(177) 00:09:07.796 fused_ordering(178) 00:09:07.796 fused_ordering(179) 00:09:07.796 fused_ordering(180) 00:09:07.796 fused_ordering(181) 00:09:07.796 fused_ordering(182) 00:09:07.796 fused_ordering(183) 00:09:07.796 fused_ordering(184) 00:09:07.796 fused_ordering(185) 00:09:07.796 fused_ordering(186) 00:09:07.796 fused_ordering(187) 00:09:07.796 fused_ordering(188) 00:09:07.796 
fused_ordering(189) 00:09:07.796 fused_ordering(190) 00:09:07.796 fused_ordering(191) 00:09:07.796 fused_ordering(192) 00:09:07.796 fused_ordering(193) 00:09:07.796 fused_ordering(194) 00:09:07.796 fused_ordering(195) 00:09:07.796 fused_ordering(196) 00:09:07.796 fused_ordering(197) 00:09:07.796 fused_ordering(198) 00:09:07.796 fused_ordering(199) 00:09:07.796 fused_ordering(200) 00:09:07.796 fused_ordering(201) 00:09:07.796 fused_ordering(202) 00:09:07.796 fused_ordering(203) 00:09:07.796 fused_ordering(204) 00:09:07.796 fused_ordering(205) 00:09:08.055 fused_ordering(206) 00:09:08.055 fused_ordering(207) 00:09:08.055 fused_ordering(208) 00:09:08.055 fused_ordering(209) 00:09:08.055 fused_ordering(210) 00:09:08.055 fused_ordering(211) 00:09:08.055 fused_ordering(212) 00:09:08.055 fused_ordering(213) 00:09:08.055 fused_ordering(214) 00:09:08.055 fused_ordering(215) 00:09:08.055 fused_ordering(216) 00:09:08.055 fused_ordering(217) 00:09:08.055 fused_ordering(218) 00:09:08.055 fused_ordering(219) 00:09:08.055 fused_ordering(220) 00:09:08.055 fused_ordering(221) 00:09:08.055 fused_ordering(222) 00:09:08.055 fused_ordering(223) 00:09:08.055 fused_ordering(224) 00:09:08.055 fused_ordering(225) 00:09:08.055 fused_ordering(226) 00:09:08.055 fused_ordering(227) 00:09:08.055 fused_ordering(228) 00:09:08.055 fused_ordering(229) 00:09:08.055 fused_ordering(230) 00:09:08.055 fused_ordering(231) 00:09:08.055 fused_ordering(232) 00:09:08.055 fused_ordering(233) 00:09:08.055 fused_ordering(234) 00:09:08.055 fused_ordering(235) 00:09:08.055 fused_ordering(236) 00:09:08.055 fused_ordering(237) 00:09:08.055 fused_ordering(238) 00:09:08.055 fused_ordering(239) 00:09:08.055 fused_ordering(240) 00:09:08.055 fused_ordering(241) 00:09:08.055 fused_ordering(242) 00:09:08.055 fused_ordering(243) 00:09:08.055 fused_ordering(244) 00:09:08.055 fused_ordering(245) 00:09:08.055 fused_ordering(246) 00:09:08.055 fused_ordering(247) 00:09:08.055 fused_ordering(248) 00:09:08.055 fused_ordering(249) 00:09:08.055 fused_ordering(250) 00:09:08.055 fused_ordering(251) 00:09:08.055 fused_ordering(252) 00:09:08.055 fused_ordering(253) 00:09:08.055 fused_ordering(254) 00:09:08.055 fused_ordering(255) 00:09:08.055 fused_ordering(256) 00:09:08.055 fused_ordering(257) 00:09:08.055 fused_ordering(258) 00:09:08.055 fused_ordering(259) 00:09:08.055 fused_ordering(260) 00:09:08.055 fused_ordering(261) 00:09:08.055 fused_ordering(262) 00:09:08.055 fused_ordering(263) 00:09:08.055 fused_ordering(264) 00:09:08.055 fused_ordering(265) 00:09:08.055 fused_ordering(266) 00:09:08.055 fused_ordering(267) 00:09:08.055 fused_ordering(268) 00:09:08.055 fused_ordering(269) 00:09:08.055 fused_ordering(270) 00:09:08.055 fused_ordering(271) 00:09:08.055 fused_ordering(272) 00:09:08.055 fused_ordering(273) 00:09:08.055 fused_ordering(274) 00:09:08.055 fused_ordering(275) 00:09:08.055 fused_ordering(276) 00:09:08.055 fused_ordering(277) 00:09:08.055 fused_ordering(278) 00:09:08.055 fused_ordering(279) 00:09:08.055 fused_ordering(280) 00:09:08.055 fused_ordering(281) 00:09:08.055 fused_ordering(282) 00:09:08.055 fused_ordering(283) 00:09:08.055 fused_ordering(284) 00:09:08.055 fused_ordering(285) 00:09:08.055 fused_ordering(286) 00:09:08.055 fused_ordering(287) 00:09:08.055 fused_ordering(288) 00:09:08.055 fused_ordering(289) 00:09:08.055 fused_ordering(290) 00:09:08.055 fused_ordering(291) 00:09:08.055 fused_ordering(292) 00:09:08.055 fused_ordering(293) 00:09:08.055 fused_ordering(294) 00:09:08.055 fused_ordering(295) 00:09:08.055 fused_ordering(296) 
00:09:08.055 fused_ordering(297) 00:09:08.055 fused_ordering(298) 00:09:08.055 fused_ordering(299) 00:09:08.055 fused_ordering(300) 00:09:08.055 fused_ordering(301) 00:09:08.055 fused_ordering(302) 00:09:08.055 fused_ordering(303) 00:09:08.055 fused_ordering(304) 00:09:08.055 fused_ordering(305) 00:09:08.055 fused_ordering(306) 00:09:08.055 fused_ordering(307) 00:09:08.055 fused_ordering(308) 00:09:08.055 fused_ordering(309) 00:09:08.055 fused_ordering(310) 00:09:08.055 fused_ordering(311) 00:09:08.055 fused_ordering(312) 00:09:08.055 fused_ordering(313) 00:09:08.055 fused_ordering(314) 00:09:08.055 fused_ordering(315) 00:09:08.055 fused_ordering(316) 00:09:08.055 fused_ordering(317) 00:09:08.055 fused_ordering(318) 00:09:08.056 fused_ordering(319) 00:09:08.056 fused_ordering(320) 00:09:08.056 fused_ordering(321) 00:09:08.056 fused_ordering(322) 00:09:08.056 fused_ordering(323) 00:09:08.056 fused_ordering(324) 00:09:08.056 fused_ordering(325) 00:09:08.056 fused_ordering(326) 00:09:08.056 fused_ordering(327) 00:09:08.056 fused_ordering(328) 00:09:08.056 fused_ordering(329) 00:09:08.056 fused_ordering(330) 00:09:08.056 fused_ordering(331) 00:09:08.056 fused_ordering(332) 00:09:08.056 fused_ordering(333) 00:09:08.056 fused_ordering(334) 00:09:08.056 fused_ordering(335) 00:09:08.056 fused_ordering(336) 00:09:08.056 fused_ordering(337) 00:09:08.056 fused_ordering(338) 00:09:08.056 fused_ordering(339) 00:09:08.056 fused_ordering(340) 00:09:08.056 fused_ordering(341) 00:09:08.056 fused_ordering(342) 00:09:08.056 fused_ordering(343) 00:09:08.056 fused_ordering(344) 00:09:08.056 fused_ordering(345) 00:09:08.056 fused_ordering(346) 00:09:08.056 fused_ordering(347) 00:09:08.056 fused_ordering(348) 00:09:08.056 fused_ordering(349) 00:09:08.056 fused_ordering(350) 00:09:08.056 fused_ordering(351) 00:09:08.056 fused_ordering(352) 00:09:08.056 fused_ordering(353) 00:09:08.056 fused_ordering(354) 00:09:08.056 fused_ordering(355) 00:09:08.056 fused_ordering(356) 00:09:08.056 fused_ordering(357) 00:09:08.056 fused_ordering(358) 00:09:08.056 fused_ordering(359) 00:09:08.056 fused_ordering(360) 00:09:08.056 fused_ordering(361) 00:09:08.056 fused_ordering(362) 00:09:08.056 fused_ordering(363) 00:09:08.056 fused_ordering(364) 00:09:08.056 fused_ordering(365) 00:09:08.056 fused_ordering(366) 00:09:08.056 fused_ordering(367) 00:09:08.056 fused_ordering(368) 00:09:08.056 fused_ordering(369) 00:09:08.056 fused_ordering(370) 00:09:08.056 fused_ordering(371) 00:09:08.056 fused_ordering(372) 00:09:08.056 fused_ordering(373) 00:09:08.056 fused_ordering(374) 00:09:08.056 fused_ordering(375) 00:09:08.056 fused_ordering(376) 00:09:08.056 fused_ordering(377) 00:09:08.056 fused_ordering(378) 00:09:08.056 fused_ordering(379) 00:09:08.056 fused_ordering(380) 00:09:08.056 fused_ordering(381) 00:09:08.056 fused_ordering(382) 00:09:08.056 fused_ordering(383) 00:09:08.056 fused_ordering(384) 00:09:08.056 fused_ordering(385) 00:09:08.056 fused_ordering(386) 00:09:08.056 fused_ordering(387) 00:09:08.056 fused_ordering(388) 00:09:08.056 fused_ordering(389) 00:09:08.056 fused_ordering(390) 00:09:08.056 fused_ordering(391) 00:09:08.056 fused_ordering(392) 00:09:08.056 fused_ordering(393) 00:09:08.056 fused_ordering(394) 00:09:08.056 fused_ordering(395) 00:09:08.056 fused_ordering(396) 00:09:08.056 fused_ordering(397) 00:09:08.056 fused_ordering(398) 00:09:08.056 fused_ordering(399) 00:09:08.056 fused_ordering(400) 00:09:08.056 fused_ordering(401) 00:09:08.056 fused_ordering(402) 00:09:08.056 fused_ordering(403) 00:09:08.056 
fused_ordering(404) 00:09:08.056 fused_ordering(405) 00:09:08.056 fused_ordering(406) 00:09:08.056 fused_ordering(407) 00:09:08.056 fused_ordering(408) 00:09:08.056 fused_ordering(409) 00:09:08.056 fused_ordering(410) 00:09:08.621 fused_ordering(411) 00:09:08.621 fused_ordering(412) 00:09:08.621 fused_ordering(413) 00:09:08.621 fused_ordering(414) 00:09:08.621 fused_ordering(415) 00:09:08.621 fused_ordering(416) 00:09:08.621 fused_ordering(417) 00:09:08.621 fused_ordering(418) 00:09:08.621 fused_ordering(419) 00:09:08.621 fused_ordering(420) 00:09:08.621 fused_ordering(421) 00:09:08.621 fused_ordering(422) 00:09:08.621 fused_ordering(423) 00:09:08.621 fused_ordering(424) 00:09:08.621 fused_ordering(425) 00:09:08.621 fused_ordering(426) 00:09:08.621 fused_ordering(427) 00:09:08.621 fused_ordering(428) 00:09:08.621 fused_ordering(429) 00:09:08.621 fused_ordering(430) 00:09:08.621 fused_ordering(431) 00:09:08.621 fused_ordering(432) 00:09:08.621 fused_ordering(433) 00:09:08.621 fused_ordering(434) 00:09:08.621 fused_ordering(435) 00:09:08.621 fused_ordering(436) 00:09:08.621 fused_ordering(437) 00:09:08.621 fused_ordering(438) 00:09:08.621 fused_ordering(439) 00:09:08.621 fused_ordering(440) 00:09:08.621 fused_ordering(441) 00:09:08.621 fused_ordering(442) 00:09:08.621 fused_ordering(443) 00:09:08.621 fused_ordering(444) 00:09:08.621 fused_ordering(445) 00:09:08.621 fused_ordering(446) 00:09:08.621 fused_ordering(447) 00:09:08.621 fused_ordering(448) 00:09:08.621 fused_ordering(449) 00:09:08.621 fused_ordering(450) 00:09:08.621 fused_ordering(451) 00:09:08.621 fused_ordering(452) 00:09:08.621 fused_ordering(453) 00:09:08.622 fused_ordering(454) 00:09:08.622 fused_ordering(455) 00:09:08.622 fused_ordering(456) 00:09:08.622 fused_ordering(457) 00:09:08.622 fused_ordering(458) 00:09:08.622 fused_ordering(459) 00:09:08.622 fused_ordering(460) 00:09:08.622 fused_ordering(461) 00:09:08.622 fused_ordering(462) 00:09:08.622 fused_ordering(463) 00:09:08.622 fused_ordering(464) 00:09:08.622 fused_ordering(465) 00:09:08.622 fused_ordering(466) 00:09:08.622 fused_ordering(467) 00:09:08.622 fused_ordering(468) 00:09:08.622 fused_ordering(469) 00:09:08.622 fused_ordering(470) 00:09:08.622 fused_ordering(471) 00:09:08.622 fused_ordering(472) 00:09:08.622 fused_ordering(473) 00:09:08.622 fused_ordering(474) 00:09:08.622 fused_ordering(475) 00:09:08.622 fused_ordering(476) 00:09:08.622 fused_ordering(477) 00:09:08.622 fused_ordering(478) 00:09:08.622 fused_ordering(479) 00:09:08.622 fused_ordering(480) 00:09:08.622 fused_ordering(481) 00:09:08.622 fused_ordering(482) 00:09:08.622 fused_ordering(483) 00:09:08.622 fused_ordering(484) 00:09:08.622 fused_ordering(485) 00:09:08.622 fused_ordering(486) 00:09:08.622 fused_ordering(487) 00:09:08.622 fused_ordering(488) 00:09:08.622 fused_ordering(489) 00:09:08.622 fused_ordering(490) 00:09:08.622 fused_ordering(491) 00:09:08.622 fused_ordering(492) 00:09:08.622 fused_ordering(493) 00:09:08.622 fused_ordering(494) 00:09:08.622 fused_ordering(495) 00:09:08.622 fused_ordering(496) 00:09:08.622 fused_ordering(497) 00:09:08.622 fused_ordering(498) 00:09:08.622 fused_ordering(499) 00:09:08.622 fused_ordering(500) 00:09:08.622 fused_ordering(501) 00:09:08.622 fused_ordering(502) 00:09:08.622 fused_ordering(503) 00:09:08.622 fused_ordering(504) 00:09:08.622 fused_ordering(505) 00:09:08.622 fused_ordering(506) 00:09:08.622 fused_ordering(507) 00:09:08.622 fused_ordering(508) 00:09:08.622 fused_ordering(509) 00:09:08.622 fused_ordering(510) 00:09:08.622 fused_ordering(511) 
00:09:08.622 fused_ordering(512) 00:09:08.622 fused_ordering(513) 00:09:08.622 fused_ordering(514) 00:09:08.622 fused_ordering(515) 00:09:08.622 fused_ordering(516) 00:09:08.622 fused_ordering(517) 00:09:08.622 fused_ordering(518) 00:09:08.622 fused_ordering(519) 00:09:08.622 fused_ordering(520) 00:09:08.622 fused_ordering(521) 00:09:08.622 fused_ordering(522) 00:09:08.622 fused_ordering(523) 00:09:08.622 fused_ordering(524) 00:09:08.622 fused_ordering(525) 00:09:08.622 fused_ordering(526) 00:09:08.622 fused_ordering(527) 00:09:08.622 fused_ordering(528) 00:09:08.622 fused_ordering(529) 00:09:08.622 fused_ordering(530) 00:09:08.622 fused_ordering(531) 00:09:08.622 fused_ordering(532) 00:09:08.622 fused_ordering(533) 00:09:08.622 fused_ordering(534) 00:09:08.622 fused_ordering(535) 00:09:08.622 fused_ordering(536) 00:09:08.622 fused_ordering(537) 00:09:08.622 fused_ordering(538) 00:09:08.622 fused_ordering(539) 00:09:08.622 fused_ordering(540) 00:09:08.622 fused_ordering(541) 00:09:08.622 fused_ordering(542) 00:09:08.622 fused_ordering(543) 00:09:08.622 fused_ordering(544) 00:09:08.622 fused_ordering(545) 00:09:08.622 fused_ordering(546) 00:09:08.622 fused_ordering(547) 00:09:08.622 fused_ordering(548) 00:09:08.622 fused_ordering(549) 00:09:08.622 fused_ordering(550) 00:09:08.622 fused_ordering(551) 00:09:08.622 fused_ordering(552) 00:09:08.622 fused_ordering(553) 00:09:08.622 fused_ordering(554) 00:09:08.622 fused_ordering(555) 00:09:08.622 fused_ordering(556) 00:09:08.622 fused_ordering(557) 00:09:08.622 fused_ordering(558) 00:09:08.622 fused_ordering(559) 00:09:08.622 fused_ordering(560) 00:09:08.622 fused_ordering(561) 00:09:08.622 fused_ordering(562) 00:09:08.622 fused_ordering(563) 00:09:08.622 fused_ordering(564) 00:09:08.622 fused_ordering(565) 00:09:08.622 fused_ordering(566) 00:09:08.622 fused_ordering(567) 00:09:08.622 fused_ordering(568) 00:09:08.622 fused_ordering(569) 00:09:08.622 fused_ordering(570) 00:09:08.622 fused_ordering(571) 00:09:08.622 fused_ordering(572) 00:09:08.622 fused_ordering(573) 00:09:08.622 fused_ordering(574) 00:09:08.622 fused_ordering(575) 00:09:08.622 fused_ordering(576) 00:09:08.622 fused_ordering(577) 00:09:08.622 fused_ordering(578) 00:09:08.622 fused_ordering(579) 00:09:08.622 fused_ordering(580) 00:09:08.622 fused_ordering(581) 00:09:08.622 fused_ordering(582) 00:09:08.622 fused_ordering(583) 00:09:08.622 fused_ordering(584) 00:09:08.622 fused_ordering(585) 00:09:08.622 fused_ordering(586) 00:09:08.622 fused_ordering(587) 00:09:08.622 fused_ordering(588) 00:09:08.622 fused_ordering(589) 00:09:08.622 fused_ordering(590) 00:09:08.622 fused_ordering(591) 00:09:08.622 fused_ordering(592) 00:09:08.622 fused_ordering(593) 00:09:08.622 fused_ordering(594) 00:09:08.622 fused_ordering(595) 00:09:08.622 fused_ordering(596) 00:09:08.622 fused_ordering(597) 00:09:08.622 fused_ordering(598) 00:09:08.622 fused_ordering(599) 00:09:08.622 fused_ordering(600) 00:09:08.622 fused_ordering(601) 00:09:08.622 fused_ordering(602) 00:09:08.622 fused_ordering(603) 00:09:08.622 fused_ordering(604) 00:09:08.622 fused_ordering(605) 00:09:08.622 fused_ordering(606) 00:09:08.622 fused_ordering(607) 00:09:08.622 fused_ordering(608) 00:09:08.622 fused_ordering(609) 00:09:08.622 fused_ordering(610) 00:09:08.622 fused_ordering(611) 00:09:08.622 fused_ordering(612) 00:09:08.622 fused_ordering(613) 00:09:08.622 fused_ordering(614) 00:09:08.622 fused_ordering(615) 00:09:08.880 fused_ordering(616) 00:09:08.880 fused_ordering(617) 00:09:08.880 fused_ordering(618) 00:09:08.880 
fused_ordering(619) 00:09:08.880 fused_ordering(620) 00:09:08.880 fused_ordering(621) 00:09:08.880 fused_ordering(622) 00:09:08.880 fused_ordering(623) 00:09:08.880 fused_ordering(624) 00:09:08.880 fused_ordering(625) 00:09:08.880 fused_ordering(626) 00:09:08.881 fused_ordering(627) 00:09:08.881 fused_ordering(628) 00:09:08.881 fused_ordering(629) 00:09:08.881 fused_ordering(630) 00:09:08.881 fused_ordering(631) 00:09:08.881 fused_ordering(632) 00:09:08.881 fused_ordering(633) 00:09:08.881 fused_ordering(634) 00:09:08.881 fused_ordering(635) 00:09:08.881 fused_ordering(636) 00:09:08.881 fused_ordering(637) 00:09:08.881 fused_ordering(638) 00:09:08.881 fused_ordering(639) 00:09:08.881 fused_ordering(640) 00:09:08.881 fused_ordering(641) 00:09:08.881 fused_ordering(642) 00:09:08.881 fused_ordering(643) 00:09:08.881 fused_ordering(644) 00:09:08.881 fused_ordering(645) 00:09:08.881 fused_ordering(646) 00:09:08.881 fused_ordering(647) 00:09:08.881 fused_ordering(648) 00:09:08.881 fused_ordering(649) 00:09:08.881 fused_ordering(650) 00:09:08.881 fused_ordering(651) 00:09:08.881 fused_ordering(652) 00:09:08.881 fused_ordering(653) 00:09:08.881 fused_ordering(654) 00:09:08.881 fused_ordering(655) 00:09:08.881 fused_ordering(656) 00:09:08.881 fused_ordering(657) 00:09:08.881 fused_ordering(658) 00:09:08.881 fused_ordering(659) 00:09:08.881 fused_ordering(660) 00:09:08.881 fused_ordering(661) 00:09:08.881 fused_ordering(662) 00:09:08.881 fused_ordering(663) 00:09:08.881 fused_ordering(664) 00:09:08.881 fused_ordering(665) 00:09:08.881 fused_ordering(666) 00:09:08.881 fused_ordering(667) 00:09:08.881 fused_ordering(668) 00:09:08.881 fused_ordering(669) 00:09:08.881 fused_ordering(670) 00:09:08.881 fused_ordering(671) 00:09:08.881 fused_ordering(672) 00:09:08.881 fused_ordering(673) 00:09:08.881 fused_ordering(674) 00:09:08.881 fused_ordering(675) 00:09:08.881 fused_ordering(676) 00:09:08.881 fused_ordering(677) 00:09:08.881 fused_ordering(678) 00:09:08.881 fused_ordering(679) 00:09:08.881 fused_ordering(680) 00:09:08.881 fused_ordering(681) 00:09:08.881 fused_ordering(682) 00:09:08.881 fused_ordering(683) 00:09:08.881 fused_ordering(684) 00:09:08.881 fused_ordering(685) 00:09:08.881 fused_ordering(686) 00:09:08.881 fused_ordering(687) 00:09:08.881 fused_ordering(688) 00:09:08.881 fused_ordering(689) 00:09:08.881 fused_ordering(690) 00:09:08.881 fused_ordering(691) 00:09:08.881 fused_ordering(692) 00:09:08.881 fused_ordering(693) 00:09:08.881 fused_ordering(694) 00:09:08.881 fused_ordering(695) 00:09:08.881 fused_ordering(696) 00:09:08.881 fused_ordering(697) 00:09:08.881 fused_ordering(698) 00:09:08.881 fused_ordering(699) 00:09:08.881 fused_ordering(700) 00:09:08.881 fused_ordering(701) 00:09:08.881 fused_ordering(702) 00:09:08.881 fused_ordering(703) 00:09:08.881 fused_ordering(704) 00:09:08.881 fused_ordering(705) 00:09:08.881 fused_ordering(706) 00:09:08.881 fused_ordering(707) 00:09:08.881 fused_ordering(708) 00:09:08.881 fused_ordering(709) 00:09:08.881 fused_ordering(710) 00:09:08.881 fused_ordering(711) 00:09:08.881 fused_ordering(712) 00:09:08.881 fused_ordering(713) 00:09:08.881 fused_ordering(714) 00:09:08.881 fused_ordering(715) 00:09:08.881 fused_ordering(716) 00:09:08.881 fused_ordering(717) 00:09:08.881 fused_ordering(718) 00:09:08.881 fused_ordering(719) 00:09:08.881 fused_ordering(720) 00:09:08.881 fused_ordering(721) 00:09:08.881 fused_ordering(722) 00:09:08.881 fused_ordering(723) 00:09:08.881 fused_ordering(724) 00:09:08.881 fused_ordering(725) 00:09:08.881 fused_ordering(726) 
00:09:08.881 fused_ordering(727) 00:09:08.881 fused_ordering(728) 00:09:08.881 fused_ordering(729) 00:09:08.881 fused_ordering(730) 00:09:08.881 fused_ordering(731) 00:09:08.881 fused_ordering(732) 00:09:08.881 fused_ordering(733) 00:09:08.881 fused_ordering(734) 00:09:08.881 fused_ordering(735) 00:09:08.881 fused_ordering(736) 00:09:08.881 fused_ordering(737) 00:09:08.881 fused_ordering(738) 00:09:08.881 fused_ordering(739) 00:09:08.881 fused_ordering(740) 00:09:08.881 fused_ordering(741) 00:09:08.881 fused_ordering(742) 00:09:08.881 fused_ordering(743) 00:09:08.881 fused_ordering(744) 00:09:08.881 fused_ordering(745) 00:09:08.881 fused_ordering(746) 00:09:08.881 fused_ordering(747) 00:09:08.881 fused_ordering(748) 00:09:08.881 fused_ordering(749) 00:09:08.881 fused_ordering(750) 00:09:08.881 fused_ordering(751) 00:09:08.881 fused_ordering(752) 00:09:08.881 fused_ordering(753) 00:09:08.881 fused_ordering(754) 00:09:08.881 fused_ordering(755) 00:09:08.881 fused_ordering(756) 00:09:08.881 fused_ordering(757) 00:09:08.881 fused_ordering(758) 00:09:08.881 fused_ordering(759) 00:09:08.881 fused_ordering(760) 00:09:08.881 fused_ordering(761) 00:09:08.881 fused_ordering(762) 00:09:08.881 fused_ordering(763) 00:09:08.881 fused_ordering(764) 00:09:08.881 fused_ordering(765) 00:09:08.881 fused_ordering(766) 00:09:08.881 fused_ordering(767) 00:09:08.881 fused_ordering(768) 00:09:08.881 fused_ordering(769) 00:09:08.881 fused_ordering(770) 00:09:08.881 fused_ordering(771) 00:09:08.881 fused_ordering(772) 00:09:08.881 fused_ordering(773) 00:09:08.881 fused_ordering(774) 00:09:08.881 fused_ordering(775) 00:09:08.881 fused_ordering(776) 00:09:08.881 fused_ordering(777) 00:09:08.881 fused_ordering(778) 00:09:08.881 fused_ordering(779) 00:09:08.881 fused_ordering(780) 00:09:08.881 fused_ordering(781) 00:09:08.881 fused_ordering(782) 00:09:08.881 fused_ordering(783) 00:09:08.881 fused_ordering(784) 00:09:08.881 fused_ordering(785) 00:09:08.881 fused_ordering(786) 00:09:08.881 fused_ordering(787) 00:09:08.881 fused_ordering(788) 00:09:08.881 fused_ordering(789) 00:09:08.881 fused_ordering(790) 00:09:08.881 fused_ordering(791) 00:09:08.881 fused_ordering(792) 00:09:08.881 fused_ordering(793) 00:09:08.881 fused_ordering(794) 00:09:08.881 fused_ordering(795) 00:09:08.881 fused_ordering(796) 00:09:08.881 fused_ordering(797) 00:09:08.881 fused_ordering(798) 00:09:08.881 fused_ordering(799) 00:09:08.881 fused_ordering(800) 00:09:08.881 fused_ordering(801) 00:09:08.881 fused_ordering(802) 00:09:08.881 fused_ordering(803) 00:09:08.881 fused_ordering(804) 00:09:08.881 fused_ordering(805) 00:09:08.881 fused_ordering(806) 00:09:08.881 fused_ordering(807) 00:09:08.881 fused_ordering(808) 00:09:08.881 fused_ordering(809) 00:09:08.881 fused_ordering(810) 00:09:08.881 fused_ordering(811) 00:09:08.881 fused_ordering(812) 00:09:08.881 fused_ordering(813) 00:09:08.881 fused_ordering(814) 00:09:08.881 fused_ordering(815) 00:09:08.881 fused_ordering(816) 00:09:08.881 fused_ordering(817) 00:09:08.881 fused_ordering(818) 00:09:08.881 fused_ordering(819) 00:09:08.881 fused_ordering(820) 00:09:09.449 fused_ordering(821) 00:09:09.449 fused_ordering(822) 00:09:09.449 fused_ordering(823) 00:09:09.449 fused_ordering(824) 00:09:09.449 fused_ordering(825) 00:09:09.449 fused_ordering(826) 00:09:09.449 fused_ordering(827) 00:09:09.449 fused_ordering(828) 00:09:09.449 fused_ordering(829) 00:09:09.449 fused_ordering(830) 00:09:09.449 fused_ordering(831) 00:09:09.449 fused_ordering(832) 00:09:09.449 fused_ordering(833) 00:09:09.449 
fused_ordering(834) 00:09:09.449 fused_ordering(835) 00:09:09.449 fused_ordering(836) 00:09:09.449 fused_ordering(837) 00:09:09.449 fused_ordering(838) 00:09:09.449 fused_ordering(839) 00:09:09.449 fused_ordering(840) 00:09:09.449 fused_ordering(841) 00:09:09.449 fused_ordering(842) 00:09:09.449 fused_ordering(843) 00:09:09.449 fused_ordering(844) 00:09:09.449 fused_ordering(845) 00:09:09.449 fused_ordering(846) 00:09:09.449 fused_ordering(847) 00:09:09.449 fused_ordering(848) 00:09:09.449 fused_ordering(849) 00:09:09.449 fused_ordering(850) 00:09:09.449 fused_ordering(851) 00:09:09.449 fused_ordering(852) 00:09:09.449 fused_ordering(853) 00:09:09.449 fused_ordering(854) 00:09:09.449 fused_ordering(855) 00:09:09.449 fused_ordering(856) 00:09:09.449 fused_ordering(857) 00:09:09.449 fused_ordering(858) 00:09:09.449 fused_ordering(859) 00:09:09.449 fused_ordering(860) 00:09:09.449 fused_ordering(861) 00:09:09.449 fused_ordering(862) 00:09:09.449 fused_ordering(863) 00:09:09.449 fused_ordering(864) 00:09:09.449 fused_ordering(865) 00:09:09.449 fused_ordering(866) 00:09:09.449 fused_ordering(867) 00:09:09.449 fused_ordering(868) 00:09:09.449 fused_ordering(869) 00:09:09.449 fused_ordering(870) 00:09:09.449 fused_ordering(871) 00:09:09.449 fused_ordering(872) 00:09:09.449 fused_ordering(873) 00:09:09.449 fused_ordering(874) 00:09:09.449 fused_ordering(875) 00:09:09.449 fused_ordering(876) 00:09:09.449 fused_ordering(877) 00:09:09.449 fused_ordering(878) 00:09:09.449 fused_ordering(879) 00:09:09.449 fused_ordering(880) 00:09:09.449 fused_ordering(881) 00:09:09.449 fused_ordering(882) 00:09:09.449 fused_ordering(883) 00:09:09.449 fused_ordering(884) 00:09:09.449 fused_ordering(885) 00:09:09.449 fused_ordering(886) 00:09:09.449 fused_ordering(887) 00:09:09.449 fused_ordering(888) 00:09:09.449 fused_ordering(889) 00:09:09.449 fused_ordering(890) 00:09:09.449 fused_ordering(891) 00:09:09.449 fused_ordering(892) 00:09:09.449 fused_ordering(893) 00:09:09.449 fused_ordering(894) 00:09:09.449 fused_ordering(895) 00:09:09.449 fused_ordering(896) 00:09:09.449 fused_ordering(897) 00:09:09.449 fused_ordering(898) 00:09:09.449 fused_ordering(899) 00:09:09.449 fused_ordering(900) 00:09:09.449 fused_ordering(901) 00:09:09.449 fused_ordering(902) 00:09:09.449 fused_ordering(903) 00:09:09.449 fused_ordering(904) 00:09:09.449 fused_ordering(905) 00:09:09.449 fused_ordering(906) 00:09:09.449 fused_ordering(907) 00:09:09.449 fused_ordering(908) 00:09:09.449 fused_ordering(909) 00:09:09.449 fused_ordering(910) 00:09:09.449 fused_ordering(911) 00:09:09.449 fused_ordering(912) 00:09:09.449 fused_ordering(913) 00:09:09.449 fused_ordering(914) 00:09:09.449 fused_ordering(915) 00:09:09.449 fused_ordering(916) 00:09:09.449 fused_ordering(917) 00:09:09.449 fused_ordering(918) 00:09:09.449 fused_ordering(919) 00:09:09.449 fused_ordering(920) 00:09:09.449 fused_ordering(921) 00:09:09.449 fused_ordering(922) 00:09:09.449 fused_ordering(923) 00:09:09.449 fused_ordering(924) 00:09:09.449 fused_ordering(925) 00:09:09.449 fused_ordering(926) 00:09:09.449 fused_ordering(927) 00:09:09.449 fused_ordering(928) 00:09:09.449 fused_ordering(929) 00:09:09.449 fused_ordering(930) 00:09:09.449 fused_ordering(931) 00:09:09.449 fused_ordering(932) 00:09:09.449 fused_ordering(933) 00:09:09.449 fused_ordering(934) 00:09:09.449 fused_ordering(935) 00:09:09.449 fused_ordering(936) 00:09:09.449 fused_ordering(937) 00:09:09.449 fused_ordering(938) 00:09:09.449 fused_ordering(939) 00:09:09.449 fused_ordering(940) 00:09:09.449 fused_ordering(941) 
00:09:09.449 fused_ordering(942) 00:09:09.449 fused_ordering(943) 00:09:09.449 fused_ordering(944) 00:09:09.449 fused_ordering(945) 00:09:09.449 fused_ordering(946) 00:09:09.449 fused_ordering(947) 00:09:09.449 fused_ordering(948) 00:09:09.449 fused_ordering(949) 00:09:09.449 fused_ordering(950) 00:09:09.449 fused_ordering(951) 00:09:09.449 fused_ordering(952) 00:09:09.449 fused_ordering(953) 00:09:09.449 fused_ordering(954) 00:09:09.449 fused_ordering(955) 00:09:09.449 fused_ordering(956) 00:09:09.449 fused_ordering(957) 00:09:09.449 fused_ordering(958) 00:09:09.449 fused_ordering(959) 00:09:09.449 fused_ordering(960) 00:09:09.449 fused_ordering(961) 00:09:09.449 fused_ordering(962) 00:09:09.449 fused_ordering(963) 00:09:09.449 fused_ordering(964) 00:09:09.449 fused_ordering(965) 00:09:09.449 fused_ordering(966) 00:09:09.449 fused_ordering(967) 00:09:09.449 fused_ordering(968) 00:09:09.449 fused_ordering(969) 00:09:09.449 fused_ordering(970) 00:09:09.449 fused_ordering(971) 00:09:09.449 fused_ordering(972) 00:09:09.449 fused_ordering(973) 00:09:09.449 fused_ordering(974) 00:09:09.449 fused_ordering(975) 00:09:09.449 fused_ordering(976) 00:09:09.449 fused_ordering(977) 00:09:09.449 fused_ordering(978) 00:09:09.449 fused_ordering(979) 00:09:09.449 fused_ordering(980) 00:09:09.449 fused_ordering(981) 00:09:09.449 fused_ordering(982) 00:09:09.449 fused_ordering(983) 00:09:09.449 fused_ordering(984) 00:09:09.449 fused_ordering(985) 00:09:09.449 fused_ordering(986) 00:09:09.449 fused_ordering(987) 00:09:09.449 fused_ordering(988) 00:09:09.449 fused_ordering(989) 00:09:09.449 fused_ordering(990) 00:09:09.449 fused_ordering(991) 00:09:09.449 fused_ordering(992) 00:09:09.449 fused_ordering(993) 00:09:09.449 fused_ordering(994) 00:09:09.449 fused_ordering(995) 00:09:09.449 fused_ordering(996) 00:09:09.449 fused_ordering(997) 00:09:09.449 fused_ordering(998) 00:09:09.449 fused_ordering(999) 00:09:09.449 fused_ordering(1000) 00:09:09.449 fused_ordering(1001) 00:09:09.449 fused_ordering(1002) 00:09:09.449 fused_ordering(1003) 00:09:09.449 fused_ordering(1004) 00:09:09.449 fused_ordering(1005) 00:09:09.449 fused_ordering(1006) 00:09:09.449 fused_ordering(1007) 00:09:09.449 fused_ordering(1008) 00:09:09.449 fused_ordering(1009) 00:09:09.449 fused_ordering(1010) 00:09:09.449 fused_ordering(1011) 00:09:09.449 fused_ordering(1012) 00:09:09.449 fused_ordering(1013) 00:09:09.449 fused_ordering(1014) 00:09:09.449 fused_ordering(1015) 00:09:09.449 fused_ordering(1016) 00:09:09.449 fused_ordering(1017) 00:09:09.449 fused_ordering(1018) 00:09:09.449 fused_ordering(1019) 00:09:09.449 fused_ordering(1020) 00:09:09.449 fused_ordering(1021) 00:09:09.449 fused_ordering(1022) 00:09:09.449 fused_ordering(1023) 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.449 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.449 rmmod nvme_tcp 00:09:09.449 rmmod 
nvme_fabrics 00:09:09.707 rmmod nvme_keyring 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@493 -- # '[' -n 71434 ']' 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@494 -- # killprocess 71434 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71434 ']' 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71434 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71434 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:09.707 killing process with pid 71434 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71434' 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71434 00:09:09.707 12:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71434 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:09.707 00:09:09.707 real 0m3.361s 00:09:09.707 user 0m4.098s 00:09:09.707 sys 0m1.254s 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.707 12:53:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.707 ************************************ 00:09:09.707 END TEST nvmf_fused_ordering 00:09:09.707 ************************************ 00:09:09.965 12:53:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.965 12:53:22 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:09.965 12:53:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.965 12:53:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.965 12:53:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.965 
************************************ 00:09:09.965 START TEST nvmf_delete_subsystem 00:09:09.965 ************************************ 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:09.965 * Looking for test storage... 00:09:09.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.965 12:53:22 
nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.965 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:09.965 12:53:22 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:09.965 Cannot find device "nvmf_tgt_br" 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.965 Cannot find device "nvmf_tgt_br2" 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # true 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:09.965 Cannot find device "nvmf_tgt_br" 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:09.965 Cannot find device "nvmf_tgt_br2" 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem 
-- nvmf/common.sh@163 -- # true 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:09:09.965 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:10.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:10.223 00:09:10.223 --- 10.0.0.2 ping statistics --- 00:09:10.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.223 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:10.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:10.223 00:09:10.223 --- 10.0.0.3 ping statistics --- 00:09:10.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.223 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:10.223 00:09:10.223 --- 10.0.0.1 ping statistics --- 00:09:10.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.223 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@437 -- # return 0 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # nvmfpid=71656 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # waitforlisten 71656 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71656 ']' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.223 12:53:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.481 [2024-07-15 12:53:22.713740] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:10.481 [2024-07-15 12:53:22.713865] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.481 [2024-07-15 12:53:22.853590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.481 [2024-07-15 12:53:22.923480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.481 [2024-07-15 12:53:22.923535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.481 [2024-07-15 12:53:22.923548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.481 [2024-07-15 12:53:22.923558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.481 [2024-07-15 12:53:22.923566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.481 [2024-07-15 12:53:22.924596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.481 [2024-07-15 12:53:22.924650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 [2024-07-15 12:53:23.051910] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
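The delete_subsystem target is configured with the same RPC pattern as the fused_ordering target, now on two reactor cores (-m 0x3) so that I/O handling and the later delete can run on separate cores. The rpc_cmd lines above and below translate to plain scripts/rpc.py calls; a sketch assuming rpc_cmd simply forwards its arguments to rpc.py on /var/tmp/spdk.sock (flags copied verbatim from the log):

  # Assumed stand-in for the harness's rpc_cmd helper.
  rpc() { sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  # Create the TCP transport with an 8192-byte IO unit size.
  rpc nvmf_create_transport -t tcp -o -u 8192

  # Create the subsystem: allow any host (-a), fixed serial number, at most 10 namespaces.
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10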
00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 [2024-07-15 12:53:23.068350] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 NULL1 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 Delay0 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71688 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:10.740 12:53:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:11.000 [2024-07-15 12:53:23.262642] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
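The data path behind cnode1 is deliberately slow: a null bdev is wrapped in a delay bdev whose four latency knobs are all set to 1,000,000 microseconds, so every I/O issued by the 5-second perf run started above is still in flight when the subsystem is deleted a couple of seconds later, which is what produces the burst of aborted completions below. A hedged sketch of the same stack and load generator, with all values copied from the log:
# Backing stack (values as used by the test; delay latencies are in microseconds)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator-side load: 5 s of 512-byte random I/O, 70% reads, queue depth 128, cores 2-3
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
# ...and, roughly two seconds in, the subsystem is pulled out from under it:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1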
00:09:12.923 12:53:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.923 12:53:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.923 12:53:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 starting I/O failed: -6 00:09:12.923 Read completed with error (sct=0, sc=8) 00:09:12.923 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 [2024-07-15 12:53:25.300787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ec400d2f0 is same with the state(5) to be set 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O 
failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 starting I/O failed: -6 00:09:12.924 [2024-07-15 12:53:25.301408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13668d0 is same with the state(5) to be set 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with 
error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 [2024-07-15 12:53:25.301980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1389a80 is same with the state(5) to be set 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 
00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Write completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 Read completed with error (sct=0, sc=8) 00:09:12.924 [2024-07-15 12:53:25.302715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ec4000c00 is same with the state(5) to be set 00:09:13.859 [2024-07-15 12:53:26.279146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366510 is same with the state(5) to be set 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 
Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 [2024-07-15 12:53:26.300497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ec400d600 is same with the state(5) to be set 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 [2024-07-15 12:53:26.301113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13666f0 is same with the state(5) to be set 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 [2024-07-15 12:53:26.301332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13884c0 is 
same with the state(5) to be set 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Write completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 Read completed with error (sct=0, sc=8) 00:09:13.859 [2024-07-15 12:53:26.302978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ec400cfe0 is same with the state(5) to be set 00:09:13.859 Initializing NVMe Controllers 00:09:13.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.859 Controller IO queue size 128, less than required. 00:09:13.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:13.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:13.860 Initialization complete. Launching workers. 
00:09:13.860 ======================================================== 00:09:13.860 Latency(us) 00:09:13.860 Device Information : IOPS MiB/s Average min max 00:09:13.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.06 0.08 890809.58 594.07 1014886.52 00:09:13.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.55 0.08 891232.01 910.26 1014884.47 00:09:13.860 ======================================================== 00:09:13.860 Total : 344.61 0.17 891021.10 594.07 1014886.52 00:09:13.860 00:09:13.860 [2024-07-15 12:53:26.303797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1366510 (9): Bad file descriptor 00:09:13.860 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.860 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:13.860 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:13.860 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71688 00:09:13.860 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71688 00:09:14.426 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71688) - No such process 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71688 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71688 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71688 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.426 [2024-07-15 12:53:26.823472] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.426 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71739 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:14.427 12:53:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.685 [2024-07-15 12:53:27.006498] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
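The kill -0 71739 / sleep 0.5 pair that starts here, and continues below, is the harness's liveness poll: signal 0 delivers nothing and only reports whether the PID still exists, so the loop spins while the 3-second perf run is alive and falls out with the "No such process" message once it exits. A generic sketch of the pattern (variable names are illustrative, not the script's own):
# Liveness-poll sketch: wait for a background pid, giving up after ~10 s
perf_pid=$!
delay=0
while kill -0 "$perf_pid"; do     # non-zero exit (and "No such process") once the pid is gone
    (( delay++ > 20 )) && break   # safety cap: at most ~21 iterations of 0.5 s each
    sleep 0.5
done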
00:09:14.943 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.943 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:14.944 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.509 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.510 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:15.510 12:53:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.075 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.075 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:16.075 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.640 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.640 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:16.640 12:53:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.898 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.898 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:16.898 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.463 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.463 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:17.463 12:53:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.721 Initializing NVMe Controllers 00:09:17.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:17.721 Controller IO queue size 128, less than required. 00:09:17.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:17.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:17.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:17.721 Initialization complete. Launching workers. 
00:09:17.721 ======================================================== 00:09:17.721 Latency(us) 00:09:17.721 Device Information : IOPS MiB/s Average min max 00:09:17.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003702.62 1000128.01 1011166.58 00:09:17.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005084.21 1000145.38 1013491.43 00:09:17.721 ======================================================== 00:09:17.721 Total : 256.00 0.12 1004393.41 1000128.01 1013491.43 00:09:17.721 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71739 00:09:17.978 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71739) - No such process 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71739 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.978 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.978 rmmod nvme_tcp 00:09:17.978 rmmod nvme_fabrics 00:09:17.978 rmmod nvme_keyring 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # '[' -n 71656 ']' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # killprocess 71656 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71656 ']' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71656 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71656 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.235 killing process with pid 71656 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71656' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71656 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71656 00:09:18.235 12:53:30 
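With both perf runs accounted for, nvmftestfini unwinds what the prologue set up: the kernel initiator modules are removed (the rmmod lines above are the verbose output of that), the nvmf_tgt process is killed and reaped by PID, and the initiator-side veth address is flushed. A hedged sketch of that teardown, condensed from the calls visible around this point; $nvmfpid stands for the tracked target PID (71656 in this run), and the retry/error handling of the real common.sh is omitted:
# Teardown sketch (condensed)
modprobe -v -r nvme-tcp        # unloads nvme_tcp plus its nvme_fabrics / nvme_keyring dependencies
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # stop the nvmf_tgt reactor process
wait "$nvmfpid"
ip -4 addr flush nvmf_init_if  # drop the test address from the initiator-side veth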
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:18.235 00:09:18.235 real 0m8.478s 00:09:18.235 user 0m26.895s 00:09:18.235 sys 0m1.492s 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.235 12:53:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.235 ************************************ 00:09:18.235 END TEST nvmf_delete_subsystem 00:09:18.235 ************************************ 00:09:18.493 12:53:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.493 12:53:30 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:18.493 12:53:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.493 12:53:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.493 12:53:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 ************************************ 00:09:18.493 START TEST nvmf_ns_masking 00:09:18.493 ************************************ 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:18.493 * Looking for test storage... 
00:09:18.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.493 12:53:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e55be5b6-6015-497b-88b4-dc3394becdd5 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=2d6f1f81-4964-4a37-9721-a7e37400bcf8 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=80a26547-20e8-437d-98e0-f8dedec5b88d 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:18.494 Cannot find device "nvmf_tgt_br" 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.494 Cannot find device "nvmf_tgt_br2" 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # true 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:18.494 Cannot find device "nvmf_tgt_br" 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:18.494 Cannot find device "nvmf_tgt_br2" 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:09:18.494 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.752 12:53:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:18.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:18.752 00:09:18.752 --- 10.0.0.2 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:18.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:18.752 00:09:18.752 --- 10.0.0.3 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:18.752 00:09:18.752 --- 10.0.0.1 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@437 -- # return 0 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:18.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@485 -- # nvmfpid=71985 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@486 -- # waitforlisten 71985 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71985 ']' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.752 12:53:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:19.008 [2024-07-15 12:53:31.258907] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:19.008 [2024-07-15 12:53:31.259150] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.008 [2024-07-15 12:53:31.406284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.266 [2024-07-15 12:53:31.480405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.266 [2024-07-15 12:53:31.480466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:19.266 [2024-07-15 12:53:31.480480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.266 [2024-07-15 12:53:31.480490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.266 [2024-07-15 12:53:31.480500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.266 [2024-07-15 12:53:31.480535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.831 12:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.089 [2024-07-15 12:53:32.540882] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.347 12:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:20.347 12:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:20.347 12:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:20.606 Malloc1 00:09:20.606 12:53:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:20.863 Malloc2 00:09:20.863 12:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.121 12:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:21.379 12:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.638 [2024-07-15 12:53:33.943957] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.638 12:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:21.638 12:53:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 80a26547-20e8-437d-98e0-f8dedec5b88d -a 10.0.0.2 -s 4420 -i 4 00:09:21.638 12:53:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.638 12:53:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.638 12:53:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.638 12:53:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.638 12:53:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
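The ns_masking flow connects the kernel initiator with an explicit host NQN and a host-identifier UUID (the -I value was produced by uuidgen earlier in the setup), then waits for a block device carrying the target's serial to appear before probing which namespaces this host is allowed to see. A hedged sketch of the connect-and-check sequence, with the UUID, addresses, and serial copied from the log and the wait loop condensed:
# Connect as host1, presenting the generated host identifier
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 80a26547-20e8-437d-98e0-f8dedec5b88d -a 10.0.0.2 -s 4420 -i 4
# Wait until the namespace shows up as a block device with the expected serial
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done
# Visibility probe used by the checks that follow: the namespace must be listed and
# must report a non-zero NGUID (/dev/nvme0 is the controller the log resolved via 'nvme list-subsys')
nvme list-ns /dev/nvme0 | grep 0x1
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ $nguid != "00000000000000000000000000000000" ]]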
00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:23.655 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:23.913 [ 0]:0x1 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=804c7ff23f4b4ae4b112a3176070c5af 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 804c7ff23f4b4ae4b112a3176070c5af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.913 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:24.171 [ 0]:0x1 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=804c7ff23f4b4ae4b112a3176070c5af 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 804c7ff23f4b4ae4b112a3176070c5af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:24.171 [ 1]:0x2 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=46c643e6cee0407588fc5761608c96af 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:24.171 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.430 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.430 12:53:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 80a26547-20e8-437d-98e0-f8dedec5b88d -a 10.0.0.2 -s 4420 -i 4 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:24.995 12:53:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.895 12:53:39 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:27.153 [ 0]:0x2 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.153 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:27.411 [ 0]:0x1 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=804c7ff23f4b4ae4b112a3176070c5af 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 804c7ff23f4b4ae4b112a3176070c5af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.411 [ 1]:0x2 00:09:27.411 12:53:39 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.411 12:53:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.669 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:27.927 [ 0]:0x2 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
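The last few steps exercise the core of namespace masking: a namespace attached with --no-auto-visible stays hidden from every host until it is explicitly allowed, and the ns_is_visible helper decides visibility by checking whether Identify Namespace returns the real NGUID or all zeroes (the comparison at ns_masking.sh@45 above). Condensed to the RPCs involved, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as used in this run:

  # attach Malloc1 as namespace 1 without exposing it to any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant a single host NQN access to namespace 1 ...
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # ... and revoke it again
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the initiator a masked namespace either disappears from nvme list-ns or identifies with an NGUID of 00000000000000000000000000000000, which is why the "[ 0]:0x1"-style probes above pipe nvme id-ns through jq -r .nguid before comparing against a run of zeroes.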
00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.927 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 80a26547-20e8-437d-98e0-f8dedec5b88d -a 10.0.0.2 -s 4420 -i 4 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:28.185 12:53:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.713 [ 0]:0x1 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=804c7ff23f4b4ae4b112a3176070c5af 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 804c7ff23f4b4ae4b112a3176070c5af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:09:30.713 [ 1]:0x2 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.713 12:53:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.713 [ 0]:0x2 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.713 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.050 12:53:43 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:31.050 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:31.050 [2024-07-15 12:53:43.477907] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:31.050 2024/07/15 12:53:43 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:31.050 request: 00:09:31.050 { 00:09:31.050 "method": "nvmf_ns_remove_host", 00:09:31.050 "params": { 00:09:31.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.050 "nsid": 2, 00:09:31.050 "host": "nqn.2016-06.io.spdk:host1" 00:09:31.050 } 00:09:31.050 } 00:09:31.050 Got JSON-RPC error response 00:09:31.050 GoRPCClient: error on JSON-RPC call 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:31.308 12:53:43 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:31.308 [ 0]:0x2 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=46c643e6cee0407588fc5761608c96af 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 46c643e6cee0407588fc5761608c96af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72362 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72362 /var/tmp/host.sock 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72362 ']' 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
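From this point the test also drives a second SPDK application that plays the NVMe-oF host role in user space, alongside the kernel initiator used so far. It listens on its own JSON-RPC socket (/var/tmp/host.sock) so it cannot collide with the target's /var/tmp/spdk.sock, and the hostrpc calls that follow are simply rpc.py pointed at that socket. A sketch of the pattern, with paths and NQNs as used in this run:

  # host-side SPDK instance pinned to core 1 (-m 2), with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &

  # attach one controller per host NQN; the bdevs it exposes reflect the masking
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1

Because host1 and host2 are granted different namespaces, the two controllers attached below end up exposing nvme0n1 and nvme1n2 respectively, and the UUID checks against bdev_get_bdevs confirm that each host sees exactly the namespace it was allowed.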
00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.308 12:53:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:31.308 [2024-07-15 12:53:43.725954] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:31.308 [2024-07-15 12:53:43.726043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ] 00:09:31.566 [2024-07-15 12:53:43.857088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.566 [2024-07-15 12:53:43.917474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.824 12:53:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.824 12:53:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:31.824 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.082 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:32.340 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e55be5b6-6015-497b-88b4-dc3394becdd5 00:09:32.340 12:53:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:09:32.340 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E55BE5B66015497B88B4DC3394BECDD5 -i 00:09:32.597 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2d6f1f81-4964-4a37-9721-a7e37400bcf8 00:09:32.597 12:53:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:09:32.597 12:53:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2D6F1F8149644A379721A7E37400BCF8 -i 00:09:32.855 12:53:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.113 12:53:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:33.370 12:53:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:33.370 12:53:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:33.936 nvme0n1 00:09:33.936 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:33.936 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:34.195 nvme1n2 00:09:34.195 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:34.195 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:34.195 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:34.195 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:34.195 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:34.761 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:34.761 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:34.761 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:34.761 12:53:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:34.761 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e55be5b6-6015-497b-88b4-dc3394becdd5 == \e\5\5\b\e\5\b\6\-\6\0\1\5\-\4\9\7\b\-\8\8\b\4\-\d\c\3\3\9\4\b\e\c\d\d\5 ]] 00:09:34.761 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:34.761 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:34.761 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:35.018 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2d6f1f81-4964-4a37-9721-a7e37400bcf8 == \2\d\6\f\1\f\8\1\-\4\9\6\4\-\4\a\3\7\-\9\7\2\1\-\a\7\e\3\7\4\0\0\b\c\f\8 ]] 00:09:35.018 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72362 00:09:35.018 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72362 ']' 00:09:35.018 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72362 00:09:35.019 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:35.019 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.019 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72362 00:09:35.277 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:35.277 killing process with pid 72362 00:09:35.277 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:35.277 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72362' 00:09:35.277 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72362 00:09:35.277 12:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72362 00:09:35.535 12:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:35.793 12:53:48 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.793 rmmod nvme_tcp 00:09:35.793 rmmod nvme_fabrics 00:09:35.793 rmmod nvme_keyring 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@493 -- # '[' -n 71985 ']' 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@494 -- # killprocess 71985 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71985 ']' 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71985 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71985 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:35.793 killing process with pid 71985 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71985' 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71985 00:09:35.793 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71985 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:36.052 00:09:36.052 real 0m17.649s 00:09:36.052 user 0m28.412s 00:09:36.052 sys 0m2.448s 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.052 12:53:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:36.052 ************************************ 00:09:36.052 END TEST nvmf_ns_masking 00:09:36.052 ************************************ 00:09:36.052 12:53:48 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:36.052 12:53:48 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:36.052 12:53:48 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:36.052 12:53:48 nvmf_tcp -- nvmf/nvmf.sh@46 -- # [[ '' -eq 1 ]] 00:09:36.052 12:53:48 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.052 12:53:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.052 12:53:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.052 12:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.052 ************************************ 00:09:36.052 START TEST nvmf_host_management 00:09:36.052 ************************************ 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.052 * Looking for test storage... 00:09:36.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.052 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:36.319 Cannot find device "nvmf_tgt_br" 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:36.319 12:53:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.319 Cannot find device "nvmf_tgt_br2" 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:36.319 Cannot find device "nvmf_tgt_br" 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:36.319 Cannot find device "nvmf_tgt_br2" 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
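The "Cannot find device" and "Cannot open network namespace" messages above are expected: they come from the teardown half of nvmf_veth_init finding nothing left over from a previous run before it rebuilds the virtual test network. Stripped of the second target interface and the link-up steps, the topology being assembled here (and completed by the bridge and iptables commands traced just below) is roughly:

  # target interfaces live in a dedicated network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator side gets 10.0.0.1, target side (inside the namespace) gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # the peer ends are enslaved to a bridge and TCP port 4420 is allowed in
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are a sanity check that this plumbing forwards traffic in both directions before nvmf_tgt is started inside the namespace.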
00:09:36.319 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:36.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:36.579 00:09:36.579 --- 10.0.0.2 ping statistics --- 00:09:36.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.579 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:36.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:36.579 00:09:36.579 --- 10.0.0.3 ping statistics --- 00:09:36.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.579 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:36.579 00:09:36.579 --- 10.0.0.1 ping statistics --- 00:09:36.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.579 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@437 -- # return 0 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@485 -- # nvmfpid=72716 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@486 -- # waitforlisten 72716 00:09:36.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72716 ']' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.579 12:53:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.579 [2024-07-15 12:53:48.961570] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:09:36.579 [2024-07-15 12:53:48.961694] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.837 [2024-07-15 12:53:49.119540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.837 [2024-07-15 12:53:49.199058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.837 [2024-07-15 12:53:49.199291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.837 [2024-07-15 12:53:49.199437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.837 [2024-07-15 12:53:49.199561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.837 [2024-07-15 12:53:49.199598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.837 [2024-07-15 12:53:49.200123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.837 [2024-07-15 12:53:49.200318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.837 [2024-07-15 12:53:49.200183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.837 [2024-07-15 12:53:49.200314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 [2024-07-15 12:53:49.915130] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.772 Malloc0 00:09:37.772 [2024-07-15 12:53:49.979875] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.772 12:53:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72793 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72793 /var/tmp/bdevperf.sock 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72793 ']' 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:09:37.772 { 00:09:37.772 "params": { 00:09:37.772 "name": "Nvme$subsystem", 00:09:37.772 "trtype": "$TEST_TRANSPORT", 00:09:37.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.772 "adrfam": "ipv4", 00:09:37.772 "trsvcid": "$NVMF_PORT", 00:09:37.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.772 "hdgst": ${hdgst:-false}, 00:09:37.772 "ddgst": ${ddgst:-false} 00:09:37.772 }, 00:09:37.772 "method": "bdev_nvme_attach_controller" 00:09:37.772 } 00:09:37.772 EOF 00:09:37.772 )") 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 
00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:09:37.772 12:53:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:09:37.772 "params": { 00:09:37.772 "name": "Nvme0", 00:09:37.772 "trtype": "tcp", 00:09:37.772 "traddr": "10.0.0.2", 00:09:37.772 "adrfam": "ipv4", 00:09:37.772 "trsvcid": "4420", 00:09:37.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.772 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:37.772 "hdgst": false, 00:09:37.772 "ddgst": false 00:09:37.772 }, 00:09:37.772 "method": "bdev_nvme_attach_controller" 00:09:37.772 }' 00:09:37.772 [2024-07-15 12:53:50.088815] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:37.772 [2024-07-15 12:53:50.088921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72793 ] 00:09:38.029 [2024-07-15 12:53:50.259294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.029 [2024-07-15 12:53:50.331020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.029 Running I/O for 10 seconds... 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@55 -- # read_io_count=963 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.002 [2024-07-15 12:53:51.237021] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.237300] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.237609] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.237863] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238023] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238176] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238252] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238358] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238374] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238384] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238394] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238405] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238415] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238424] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238434] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 [2024-07-15 12:53:51.238444] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e4310 is same with the state(5) to be set 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
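The trace above is the host-management test's waitforio step: after framework_wait_init confirms bdevperf is up, it polls the job over bdevperf's private RPC socket until Nvme0n1 has completed at least 100 reads (963 by this point), then revokes the initiator's host NQN on the subsystem so that I/O starts failing mid-run; the host is added back on the very next trace line so the controller reset attempted later can reconnect. A minimal stand-alone sketch of that sequence, using the stock scripts/rpc.py client and the socket, bdev and NQN names seen in this log (the loop bound and sleep interval are illustrative assumptions, not copied from host_management.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bdevperf.sock framework_wait_init    # block until bdevperf is ready
  # Poll the verify job until it has issued enough reads to be worth interrupting.
  for i in $(seq 1 10); do
      reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 1
  done
  # Revoke access while the job is still running: the target drops the queue pair
  # and every queued command completes as aborted.
  "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Re-allow the host so a subsequent controller reset can reconnect.
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0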
00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.002 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.002 [2024-07-15 12:53:51.245708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.002 [2024-07-15 12:53:51.245756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.002 [2024-07-15 12:53:51.245787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.002 [2024-07-15 12:53:51.245798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.002 [2024-07-15 12:53:51.245809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.002 [2024-07-15 12:53:51.245818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.002 [2024-07-15 12:53:51.245828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:39.002 [2024-07-15 12:53:51.245838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.245847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c28af0 is same with the state(5) to be set 00:09:39.003 [2024-07-15 12:53:51.246571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 
12:53:51.246696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.246985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.246995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.247743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.247752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.248232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.248305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.248516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.248677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.248825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.248929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.249051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.249243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.249387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.249541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.249714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.249910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.250102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.250234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.250338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.250354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.003 [2024-07-15 12:53:51.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.003 [2024-07-15 12:53:51.250389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.004 [2024-07-15 12:53:51.250402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.004 [2024-07-15 12:53:51.250415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.004 [2024-07-15 12:53:51.250427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:39.004 [2024-07-15 12:53:51.250436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.004 [2024-07-15 12:53:51.250503] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c28820 was disconnected and freed. reset controller. 
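What the flood of nvme_qpair output above records: with host0 revoked, the target tears down the TCP queue pair (the tcp.c recv-state errors earlier in the trace), and on the initiator side every WRITE that bdevperf still had in flight completes with ABORTED - SQ DELETION (status code type 0x0, status code 0x08, i.e. the command was aborted because its submission queue was deleted); bdev_nvme then frees qpair 0x1c28820 and schedules a controller reset. The per-job summary that bdevperf prints next is easier to read laid out as a table; the figures below are copied from the lines that follow:

  Job Nvme0n1 (core mask 0x1, workload verify, queue depth 64, IO size 65536 B), ended after about 0.78 s with error:
    runtime(s)   IOPS      MiB/s   Fail/s   TO/s   Average(us)   min(us)    max(us)
    0.78         1400.78   87.55   82.40    0.00   42149.29      4408.79    39321.60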
00:09:39.004 [2024-07-15 12:53:51.251693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:39.004 task offset: 8192 on job bdev=Nvme0n1 fails 00:09:39.004 00:09:39.004 Latency(us) 00:09:39.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.004 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:39.004 Job: Nvme0n1 ended in about 0.78 seconds with error 00:09:39.004 Verification LBA range: start 0x0 length 0x400 00:09:39.004 Nvme0n1 : 0.78 1400.78 87.55 82.40 0.00 42149.29 4408.79 39321.60 00:09:39.004 =================================================================================================================== 00:09:39.004 Total : 1400.78 87.55 82.40 0.00 42149.29 4408.79 39321.60 00:09:39.004 12:53:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.004 12:53:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:39.004 [2024-07-15 12:53:51.253737] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.004 [2024-07-15 12:53:51.253776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c28af0 (9): Bad file descriptor 00:09:39.004 [2024-07-15 12:53:51.262391] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72793 00:09:39.958 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72793) - No such process 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:09:39.958 { 00:09:39.958 "params": { 00:09:39.958 "name": "Nvme$subsystem", 00:09:39.958 "trtype": "$TEST_TRANSPORT", 00:09:39.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.958 "adrfam": "ipv4", 00:09:39.958 "trsvcid": "$NVMF_PORT", 00:09:39.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.958 "hdgst": ${hdgst:-false}, 00:09:39.958 "ddgst": ${ddgst:-false} 00:09:39.958 }, 00:09:39.958 "method": "bdev_nvme_attach_controller" 00:09:39.958 } 00:09:39.958 EOF 00:09:39.958 )") 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:09:39.958 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 
00:09:39.959 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:09:39.959 12:53:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:09:39.959 "params": { 00:09:39.959 "name": "Nvme0", 00:09:39.959 "trtype": "tcp", 00:09:39.959 "traddr": "10.0.0.2", 00:09:39.959 "adrfam": "ipv4", 00:09:39.959 "trsvcid": "4420", 00:09:39.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:39.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:39.959 "hdgst": false, 00:09:39.959 "ddgst": false 00:09:39.959 }, 00:09:39.959 "method": "bdev_nvme_attach_controller" 00:09:39.959 }' 00:09:39.959 [2024-07-15 12:53:52.316473] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:39.959 [2024-07-15 12:53:52.316573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72843 ] 00:09:40.217 [2024-07-15 12:53:52.455648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.217 [2024-07-15 12:53:52.525014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.217 Running I/O for 1 seconds... 00:09:41.592 00:09:41.592 Latency(us) 00:09:41.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.592 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:41.592 Verification LBA range: start 0x0 length 0x400 00:09:41.592 Nvme0n1 : 1.00 1531.54 95.72 0.00 0.00 40834.20 4706.68 41704.73 00:09:41.592 =================================================================================================================== 00:09:41.592 Total : 1531.54 95.72 0.00 0.00 40834.20 4706.68 41704.73 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.592 rmmod nvme_tcp 00:09:41.592 rmmod nvme_fabrics 00:09:41.592 rmmod nvme_keyring 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@493 -- # '[' -n 72716 ']' 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@494 -- # killprocess 72716 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72716 ']' 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72716 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72716 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72716' 00:09:41.592 killing process with pid 72716 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72716 00:09:41.592 12:53:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72716 00:09:41.850 [2024-07-15 12:53:54.102762] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:41.850 00:09:41.850 real 0m5.736s 00:09:41.850 user 0m22.483s 00:09:41.850 sys 0m1.256s 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.850 ************************************ 00:09:41.850 END TEST nvmf_host_management 00:09:41.850 12:53:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.850 ************************************ 00:09:41.850 12:53:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:41.850 12:53:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:41.850 12:53:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:41.850 12:53:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.850 12:53:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.850 ************************************ 00:09:41.850 START TEST nvmf_lvol 00:09:41.850 ************************************ 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:41.850 * Looking for test storage... 00:09:41.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.850 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:42.108 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:42.109 Cannot find device "nvmf_tgt_br" 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.109 Cannot find device "nvmf_tgt_br2" 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:42.109 Cannot find device "nvmf_tgt_br" 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:42.109 Cannot find device "nvmf_tgt_br2" 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:42.109 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:42.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:42.367 00:09:42.367 --- 10.0.0.2 ping statistics --- 00:09:42.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.367 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:42.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:42.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:42.367 00:09:42.367 --- 10.0.0.3 ping statistics --- 00:09:42.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.367 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:42.367 00:09:42.367 --- 10.0.0.1 ping statistics --- 00:09:42.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.367 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@437 -- # return 0 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@485 -- # nvmfpid=73045 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@486 -- # waitforlisten 73045 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73045 ']' 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.367 12:53:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:42.367 [2024-07-15 12:53:54.772415] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
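For anyone replaying the nvmf_veth_init trace above by hand, the following is a condensed, standalone sketch of the same topology: one veth pair for the initiator, two veth pairs for the target moved into a dedicated network namespace, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP on port 4420. Apart from the shebang and set -e, every command and name is taken from the trace; it needs root.

  #!/usr/bin/env bash
  # Condensed sketch of the nvmf_veth_init flow traced above (run as root).
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target interfaces inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # bridge the host-side peers so initiator and namespace can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic (port 4420) and bridge-internal forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # same reachability check the harness performs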
00:09:42.367 [2024-07-15 12:53:54.772529] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.624 [2024-07-15 12:53:54.912460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.624 [2024-07-15 12:53:54.982164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.625 [2024-07-15 12:53:54.982230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.625 [2024-07-15 12:53:54.982246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.625 [2024-07-15 12:53:54.982256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.625 [2024-07-15 12:53:54.982265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.625 [2024-07-15 12:53:54.982559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.625 [2024-07-15 12:53:54.982637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.625 [2024-07-15 12:53:54.982646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.559 12:53:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:43.816 [2024-07-15 12:53:56.096950] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.816 12:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.074 12:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:44.074 12:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.332 12:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:44.332 12:53:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:44.591 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:45.157 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6e952fcd-160d-452e-91d4-2c014413dd18 00:09:45.157 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e952fcd-160d-452e-91d4-2c014413dd18 lvol 20 00:09:45.415 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=658c46ee-9a4d-42d9-bd5b-844c6d1dbe46 00:09:45.415 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.673 12:53:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 658c46ee-9a4d-42d9-bd5b-844c6d1dbe46 00:09:45.931 12:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.189 [2024-07-15 12:53:58.529893] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.189 12:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.448 12:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73198 00:09:46.448 12:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:46.448 12:53:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:47.435 12:53:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 658c46ee-9a4d-42d9-bd5b-844c6d1dbe46 MY_SNAPSHOT 00:09:48.001 12:54:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=23a925c9-56c8-4b2d-a4f4-d931f92a24ca 00:09:48.001 12:54:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 658c46ee-9a4d-42d9-bd5b-844c6d1dbe46 30 00:09:48.260 12:54:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 23a925c9-56c8-4b2d-a4f4-d931f92a24ca MY_CLONE 00:09:48.518 12:54:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3fd98cd3-16eb-488d-8307-58d35cd1a570 00:09:48.518 12:54:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 3fd98cd3-16eb-488d-8307-58d35cd1a570 00:09:49.454 12:54:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73198 00:09:57.560 Initializing NVMe Controllers 00:09:57.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:57.560 Controller IO queue size 128, less than required. 00:09:57.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:57.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:57.560 Initialization complete. Launching workers. 
00:09:57.560 ======================================================== 00:09:57.560 Latency(us) 00:09:57.560 Device Information : IOPS MiB/s Average min max 00:09:57.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10547.30 41.20 12146.22 715.15 74634.58 00:09:57.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10274.10 40.13 12467.48 3229.93 69969.46 00:09:57.560 ======================================================== 00:09:57.560 Total : 20821.40 81.33 12304.74 715.15 74634.58 00:09:57.560 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 658c46ee-9a4d-42d9-bd5b-844c6d1dbe46 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e952fcd-160d-452e-91d4-2c014413dd18 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.560 12:54:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.560 rmmod nvme_tcp 00:09:57.560 rmmod nvme_fabrics 00:09:57.560 rmmod nvme_keyring 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@493 -- # '[' -n 73045 ']' 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@494 -- # killprocess 73045 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73045 ']' 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73045 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:57.560 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73045 00:09:57.818 killing process with pid 73045 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73045' 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73045 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73045 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:57.818 
12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:09:57.818 00:09:57.818 real 0m16.055s 00:09:57.818 user 1m7.226s 00:09:57.818 sys 0m3.846s 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.818 ************************************ 00:09:57.818 END TEST nvmf_lvol 00:09:57.818 12:54:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:57.818 ************************************ 00:09:58.076 12:54:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:58.076 12:54:10 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:58.076 12:54:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.076 12:54:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.076 12:54:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.076 ************************************ 00:09:58.076 START TEST nvmf_lvs_grow 00:09:58.076 ************************************ 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:58.076 * Looking for test storage... 
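Before the nvmf_lvs_grow setup that starts here, it is worth noting how compact the RPC sequence behind the nvmf_lvol run that just finished really is. The sketch below pulls those calls out of the trace into one place; the rpc.py path, bdev names, sizes and subsystem NQN are the ones the log shows, and the shell variables capturing returned UUIDs mirror what the test script itself does.

  #!/usr/bin/env bash
  # Recap of the RPC calls the nvmf_lvol test issued above (names/sizes from the trace).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # two 64 MiB malloc bdevs striped into a raid0 that backs the lvstore
  m0=$($RPC bdev_malloc_create 64 512)
  m1=$($RPC bdev_malloc_create 64 512)
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
  # logical volume store plus a 20 MiB lvol on top of the raid
  lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)
  # expose the lvol over NVMe/TCP on the namespace-side address
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf writes to the namespace, snapshot/resize/clone/inflate the lvol
  snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $RPC bdev_lvol_resize "$lvol" 30
  clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
  $RPC bdev_lvol_inflate "$clone"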
00:09:58.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.076 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # local -g 
is_hw=no 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@436 -- # nvmf_veth_init 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:09:58.077 Cannot find device "nvmf_tgt_br" 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.077 Cannot find device "nvmf_tgt_br2" 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # true 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:09:58.077 Cannot find device "nvmf_tgt_br" 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:09:58.077 Cannot find device "nvmf_tgt_br2" 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:09:58.077 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link 
delete nvmf_init_if 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:09:58.337 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:09:58.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:58.338 00:09:58.338 --- 10.0.0.2 ping statistics --- 00:09:58.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.338 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:09:58.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:58.338 00:09:58.338 --- 10.0.0.3 ping statistics --- 00:09:58.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.338 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:58.338 00:09:58.338 --- 10.0.0.1 ping statistics --- 00:09:58.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.338 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@437 -- # return 0 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:58.338 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:58.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@485 -- # nvmfpid=73564 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@486 -- # waitforlisten 73564 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73564 ']' 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
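The nvmfappstart step being traced here amounts to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers. A minimal hand-rolled equivalent is sketched below; the binary and rpc.py paths are the ones from the log, while the polling loop is only a crude stand-in for the harness's waitforlisten helper, not its actual implementation.

  #!/usr/bin/env bash
  # Minimal stand-in for the nvmfappstart/waitforlisten sequence traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  NS=nvmf_tgt_ns_spdk
  SOCK=/var/tmp/spdk.sock
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  tgt_pid=$!
  # poll until the RPC socket responds (rough substitute for waitforlisten)
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
  done
  # first RPC the lvs_grow test issues once the target is up
  "$SPDK/scripts/rpc.py" -s "$SOCK" nvmf_create_transport -t tcp -o -u 8192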
00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.339 12:54:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:58.604 [2024-07-15 12:54:10.825987] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:09:58.604 [2024-07-15 12:54:10.826086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.604 [2024-07-15 12:54:10.965152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.604 [2024-07-15 12:54:11.033369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.604 [2024-07-15 12:54:11.033429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.604 [2024-07-15 12:54:11.033443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.604 [2024-07-15 12:54:11.033463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.604 [2024-07-15 12:54:11.033472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.604 [2024-07-15 12:54:11.033499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.862 12:54:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.122 [2024-07-15 12:54:11.410090] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:59.122 ************************************ 00:09:59.122 START TEST lvs_grow_clean 00:09:59.122 ************************************ 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:59.122 12:54:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:59.122 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.381 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:59.381 12:54:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:59.639 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:09:59.639 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:09:59.639 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:59.897 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:59.897 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:59.897 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 lvol 150 00:10:00.464 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9b177a15-527e-45e0-9b2c-c11bcf018619 00:10:00.464 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:00.464 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:00.464 [2024-07-15 12:54:12.925711] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:00.464 [2024-07-15 12:54:12.925818] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:00.464 true 00:10:00.722 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:00.722 12:54:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:00.985 12:54:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:00.985 12:54:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.243 12:54:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b177a15-527e-45e0-9b2c-c11bcf018619 00:10:01.500 12:54:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:01.757 [2024-07-15 12:54:14.118478] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.757 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73718 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73718 /var/tmp/bdevperf.sock 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73718 ']' 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.015 12:54:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:02.272 [2024-07-15 12:54:14.533137] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:10:02.272 [2024-07-15 12:54:14.533262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73718 ] 00:10:02.272 [2024-07-15 12:54:14.683062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.531 [2024-07-15 12:54:14.771271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.465 12:54:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.465 12:54:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:03.465 12:54:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:03.465 Nvme0n1 00:10:03.465 12:54:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:03.724 [ 00:10:03.724 { 00:10:03.724 "aliases": [ 00:10:03.724 "9b177a15-527e-45e0-9b2c-c11bcf018619" 00:10:03.724 ], 00:10:03.724 "assigned_rate_limits": { 00:10:03.724 "r_mbytes_per_sec": 0, 00:10:03.724 "rw_ios_per_sec": 0, 00:10:03.724 "rw_mbytes_per_sec": 0, 00:10:03.724 "w_mbytes_per_sec": 0 00:10:03.724 }, 00:10:03.724 "block_size": 4096, 00:10:03.724 "claimed": false, 00:10:03.724 "driver_specific": { 00:10:03.724 "mp_policy": "active_passive", 00:10:03.724 "nvme": [ 00:10:03.724 { 00:10:03.724 "ctrlr_data": { 00:10:03.724 "ana_reporting": false, 00:10:03.724 "cntlid": 1, 00:10:03.724 "firmware_revision": "24.09", 00:10:03.724 "model_number": "SPDK bdev Controller", 00:10:03.724 "multi_ctrlr": true, 00:10:03.724 "oacs": { 00:10:03.724 "firmware": 0, 00:10:03.724 "format": 0, 00:10:03.724 "ns_manage": 0, 00:10:03.724 "security": 0 00:10:03.724 }, 00:10:03.724 "serial_number": "SPDK0", 00:10:03.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:03.724 "vendor_id": "0x8086" 00:10:03.724 }, 00:10:03.724 "ns_data": { 00:10:03.724 "can_share": true, 00:10:03.724 "id": 1 00:10:03.724 }, 00:10:03.724 "trid": { 00:10:03.725 "adrfam": "IPv4", 00:10:03.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:03.725 "traddr": "10.0.0.2", 00:10:03.725 "trsvcid": "4420", 00:10:03.725 "trtype": "TCP" 00:10:03.725 }, 00:10:03.725 "vs": { 00:10:03.725 "nvme_version": "1.3" 00:10:03.725 } 00:10:03.725 } 00:10:03.725 ] 00:10:03.725 }, 00:10:03.725 "memory_domains": [ 00:10:03.725 { 00:10:03.725 "dma_device_id": "system", 00:10:03.725 "dma_device_type": 1 00:10:03.725 } 00:10:03.725 ], 00:10:03.725 "name": "Nvme0n1", 00:10:03.725 "num_blocks": 38912, 00:10:03.725 "product_name": "NVMe disk", 00:10:03.725 "supported_io_types": { 00:10:03.725 "abort": true, 00:10:03.725 "compare": true, 00:10:03.725 "compare_and_write": true, 00:10:03.725 "copy": true, 00:10:03.725 "flush": true, 00:10:03.725 "get_zone_info": false, 00:10:03.725 "nvme_admin": true, 00:10:03.725 "nvme_io": true, 00:10:03.725 "nvme_io_md": false, 00:10:03.725 "nvme_iov_md": false, 00:10:03.725 "read": true, 00:10:03.725 "reset": true, 00:10:03.725 "seek_data": false, 00:10:03.725 "seek_hole": false, 00:10:03.725 "unmap": true, 00:10:03.725 "write": true, 00:10:03.725 "write_zeroes": true, 00:10:03.725 "zcopy": false, 00:10:03.725 
"zone_append": false, 00:10:03.725 "zone_management": false 00:10:03.725 }, 00:10:03.725 "uuid": "9b177a15-527e-45e0-9b2c-c11bcf018619", 00:10:03.725 "zoned": false 00:10:03.725 } 00:10:03.725 ] 00:10:03.984 12:54:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.984 12:54:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73765 00:10:03.984 12:54:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:03.984 Running I/O for 10 seconds... 00:10:05.027 Latency(us) 00:10:05.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.027 Nvme0n1 : 1.00 8063.00 31.50 0.00 0.00 0.00 0.00 0.00 00:10:05.027 =================================================================================================================== 00:10:05.027 Total : 8063.00 31.50 0.00 0.00 0.00 0.00 0.00 00:10:05.027 00:10:05.960 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:05.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.960 Nvme0n1 : 2.00 8208.00 32.06 0.00 0.00 0.00 0.00 0.00 00:10:05.960 =================================================================================================================== 00:10:05.960 Total : 8208.00 32.06 0.00 0.00 0.00 0.00 0.00 00:10:05.960 00:10:06.219 true 00:10:06.219 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:06.219 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:06.477 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:06.477 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:06.477 12:54:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73765 00:10:07.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.042 Nvme0n1 : 3.00 8163.67 31.89 0.00 0.00 0.00 0.00 0.00 00:10:07.042 =================================================================================================================== 00:10:07.042 Total : 8163.67 31.89 0.00 0.00 0.00 0.00 0.00 00:10:07.042 00:10:07.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.977 Nvme0n1 : 4.00 8043.25 31.42 0.00 0.00 0.00 0.00 0.00 00:10:07.977 =================================================================================================================== 00:10:07.977 Total : 8043.25 31.42 0.00 0.00 0.00 0.00 0.00 00:10:07.977 00:10:08.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.906 Nvme0n1 : 5.00 7934.60 30.99 0.00 0.00 0.00 0.00 0.00 00:10:08.906 =================================================================================================================== 00:10:08.906 Total : 7934.60 30.99 0.00 0.00 0.00 0.00 0.00 00:10:08.906 00:10:09.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.838 
Nvme0n1 : 6.00 7905.17 30.88 0.00 0.00 0.00 0.00 0.00 00:10:09.838 =================================================================================================================== 00:10:09.838 Total : 7905.17 30.88 0.00 0.00 0.00 0.00 0.00 00:10:09.838 00:10:11.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.208 Nvme0n1 : 7.00 7885.00 30.80 0.00 0.00 0.00 0.00 0.00 00:10:11.208 =================================================================================================================== 00:10:11.208 Total : 7885.00 30.80 0.00 0.00 0.00 0.00 0.00 00:10:11.208 00:10:12.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.147 Nvme0n1 : 8.00 7848.50 30.66 0.00 0.00 0.00 0.00 0.00 00:10:12.147 =================================================================================================================== 00:10:12.147 Total : 7848.50 30.66 0.00 0.00 0.00 0.00 0.00 00:10:12.147 00:10:13.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.082 Nvme0n1 : 9.00 7732.78 30.21 0.00 0.00 0.00 0.00 0.00 00:10:13.082 =================================================================================================================== 00:10:13.082 Total : 7732.78 30.21 0.00 0.00 0.00 0.00 0.00 00:10:13.082 00:10:14.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.015 Nvme0n1 : 10.00 7614.50 29.74 0.00 0.00 0.00 0.00 0.00 00:10:14.015 =================================================================================================================== 00:10:14.015 Total : 7614.50 29.74 0.00 0.00 0.00 0.00 0.00 00:10:14.015 00:10:14.015 00:10:14.015 Latency(us) 00:10:14.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.015 Nvme0n1 : 10.00 7623.65 29.78 0.00 0.00 16782.36 7864.32 75783.45 00:10:14.015 =================================================================================================================== 00:10:14.015 Total : 7623.65 29.78 0.00 0.00 16782.36 7864.32 75783.45 00:10:14.015 0 00:10:14.015 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73718 00:10:14.015 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73718 ']' 00:10:14.015 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73718 00:10:14.015 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73718 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73718' 00:10:14.016 killing process with pid 73718 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73718 00:10:14.016 Received shutdown signal, test time was about 10.000000 seconds 00:10:14.016 00:10:14.016 Latency(us) 00:10:14.016 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.016 =================================================================================================================== 00:10:14.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:14.016 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73718 00:10:14.274 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.532 12:54:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:14.791 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:14.791 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:15.048 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:15.048 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:15.048 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:15.306 [2024-07-15 12:54:27.637406] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:15.306 12:54:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:15.564 2024/07/15 12:54:28 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:f768c2ef-fdb3-45df-9d69-c092ec9ff6f7], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:15.564 request: 00:10:15.564 { 00:10:15.564 "method": "bdev_lvol_get_lvstores", 00:10:15.564 "params": { 00:10:15.564 "uuid": "f768c2ef-fdb3-45df-9d69-c092ec9ff6f7" 00:10:15.564 } 00:10:15.564 } 00:10:15.564 Got JSON-RPC error response 00:10:15.564 GoRPCClient: error on JSON-RPC call 00:10:15.564 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:15.564 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:15.564 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:15.564 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:15.564 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.131 aio_bdev 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9b177a15-527e-45e0-9b2c-c11bcf018619 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=9b177a15-527e-45e0-9b2c-c11bcf018619 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:16.131 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b177a15-527e-45e0-9b2c-c11bcf018619 -t 2000 00:10:16.390 [ 00:10:16.390 { 00:10:16.390 "aliases": [ 00:10:16.390 "lvs/lvol" 00:10:16.390 ], 00:10:16.390 "assigned_rate_limits": { 00:10:16.390 "r_mbytes_per_sec": 0, 00:10:16.390 "rw_ios_per_sec": 0, 00:10:16.390 "rw_mbytes_per_sec": 0, 00:10:16.390 "w_mbytes_per_sec": 0 00:10:16.390 }, 00:10:16.390 "block_size": 4096, 00:10:16.390 "claimed": false, 00:10:16.390 "driver_specific": { 00:10:16.390 "lvol": { 00:10:16.390 "base_bdev": "aio_bdev", 00:10:16.390 "clone": false, 00:10:16.390 "esnap_clone": false, 00:10:16.390 "lvol_store_uuid": "f768c2ef-fdb3-45df-9d69-c092ec9ff6f7", 00:10:16.390 "num_allocated_clusters": 38, 00:10:16.390 "snapshot": false, 00:10:16.390 "thin_provision": false 00:10:16.390 } 00:10:16.390 }, 00:10:16.390 "name": "9b177a15-527e-45e0-9b2c-c11bcf018619", 00:10:16.390 "num_blocks": 38912, 00:10:16.390 "product_name": "Logical Volume", 00:10:16.390 "supported_io_types": { 00:10:16.390 "abort": false, 00:10:16.390 "compare": false, 00:10:16.390 "compare_and_write": false, 00:10:16.390 "copy": false, 00:10:16.390 "flush": false, 00:10:16.390 "get_zone_info": false, 00:10:16.390 "nvme_admin": false, 00:10:16.390 "nvme_io": false, 00:10:16.390 "nvme_io_md": false, 00:10:16.390 "nvme_iov_md": false, 00:10:16.390 "read": true, 00:10:16.390 "reset": true, 
00:10:16.390 "seek_data": true, 00:10:16.390 "seek_hole": true, 00:10:16.390 "unmap": true, 00:10:16.390 "write": true, 00:10:16.390 "write_zeroes": true, 00:10:16.390 "zcopy": false, 00:10:16.390 "zone_append": false, 00:10:16.390 "zone_management": false 00:10:16.390 }, 00:10:16.390 "uuid": "9b177a15-527e-45e0-9b2c-c11bcf018619", 00:10:16.390 "zoned": false 00:10:16.390 } 00:10:16.390 ] 00:10:16.390 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:16.390 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:16.390 12:54:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:16.648 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:16.648 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:16.648 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:17.213 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:17.213 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9b177a15-527e-45e0-9b2c-c11bcf018619 00:10:17.213 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f768c2ef-fdb3-45df-9d69-c092ec9ff6f7 00:10:17.528 12:54:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:17.786 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:18.352 00:10:18.352 real 0m19.094s 00:10:18.352 user 0m18.626s 00:10:18.352 sys 0m2.087s 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:18.352 ************************************ 00:10:18.352 END TEST lvs_grow_clean 00:10:18.352 ************************************ 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.352 ************************************ 00:10:18.352 START TEST lvs_grow_dirty 00:10:18.352 ************************************ 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:18.352 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:18.610 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:18.610 12:54:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:18.868 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:18.868 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:18.868 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:19.124 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:19.124 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:19.124 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 lvol 150 00:10:19.382 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:19.382 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:19.382 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:19.640 [2024-07-15 12:54:31.942708] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:19.641 [2024-07-15 12:54:31.942810] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:19.641 true 00:10:19.641 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:19.641 12:54:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:19.899 12:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:19.899 12:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:20.158 12:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:20.417 12:54:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:20.675 [2024-07-15 12:54:33.067318] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.675 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74169 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74169 /var/tmp/bdevperf.sock 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74169 ']' 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.932 12:54:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:20.932 [2024-07-15 12:54:33.393927] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:10:20.932 [2024-07-15 12:54:33.394048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74169 ] 00:10:21.189 [2024-07-15 12:54:33.537579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.189 [2024-07-15 12:54:33.607337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.122 12:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.122 12:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:22.122 12:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:22.379 Nvme0n1 00:10:22.379 12:54:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:22.638 [ 00:10:22.638 { 00:10:22.638 "aliases": [ 00:10:22.638 "2afc56ad-b826-4ef3-b577-3bc6fa21bf37" 00:10:22.638 ], 00:10:22.638 "assigned_rate_limits": { 00:10:22.638 "r_mbytes_per_sec": 0, 00:10:22.638 "rw_ios_per_sec": 0, 00:10:22.638 "rw_mbytes_per_sec": 0, 00:10:22.638 "w_mbytes_per_sec": 0 00:10:22.638 }, 00:10:22.638 "block_size": 4096, 00:10:22.638 "claimed": false, 00:10:22.638 "driver_specific": { 00:10:22.638 "mp_policy": "active_passive", 00:10:22.638 "nvme": [ 00:10:22.638 { 00:10:22.638 "ctrlr_data": { 00:10:22.638 "ana_reporting": false, 00:10:22.638 "cntlid": 1, 00:10:22.638 "firmware_revision": "24.09", 00:10:22.638 "model_number": "SPDK bdev Controller", 00:10:22.638 "multi_ctrlr": true, 00:10:22.638 "oacs": { 00:10:22.638 "firmware": 0, 00:10:22.638 "format": 0, 00:10:22.638 "ns_manage": 0, 00:10:22.638 "security": 0 00:10:22.638 }, 00:10:22.638 "serial_number": "SPDK0", 00:10:22.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:22.638 "vendor_id": "0x8086" 00:10:22.638 }, 00:10:22.638 "ns_data": { 00:10:22.638 "can_share": true, 00:10:22.638 "id": 1 00:10:22.638 }, 00:10:22.638 "trid": { 00:10:22.638 "adrfam": "IPv4", 00:10:22.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:22.638 "traddr": "10.0.0.2", 00:10:22.638 "trsvcid": "4420", 00:10:22.638 "trtype": "TCP" 00:10:22.638 }, 00:10:22.638 "vs": { 00:10:22.638 "nvme_version": "1.3" 00:10:22.638 } 00:10:22.638 } 00:10:22.638 ] 00:10:22.638 }, 00:10:22.638 "memory_domains": [ 00:10:22.638 { 00:10:22.638 "dma_device_id": "system", 00:10:22.638 "dma_device_type": 1 00:10:22.638 } 00:10:22.638 ], 00:10:22.638 "name": "Nvme0n1", 00:10:22.638 "num_blocks": 38912, 00:10:22.638 "product_name": "NVMe disk", 00:10:22.638 "supported_io_types": { 00:10:22.638 "abort": true, 00:10:22.638 "compare": true, 00:10:22.638 "compare_and_write": true, 00:10:22.638 "copy": true, 00:10:22.638 "flush": true, 00:10:22.638 "get_zone_info": false, 00:10:22.638 "nvme_admin": true, 00:10:22.638 "nvme_io": true, 00:10:22.638 "nvme_io_md": false, 00:10:22.638 "nvme_iov_md": false, 00:10:22.638 "read": true, 00:10:22.638 "reset": true, 00:10:22.638 "seek_data": false, 00:10:22.638 "seek_hole": false, 00:10:22.638 "unmap": true, 00:10:22.638 "write": true, 00:10:22.638 "write_zeroes": true, 00:10:22.638 "zcopy": false, 00:10:22.638 
"zone_append": false, 00:10:22.638 "zone_management": false 00:10:22.638 }, 00:10:22.638 "uuid": "2afc56ad-b826-4ef3-b577-3bc6fa21bf37", 00:10:22.638 "zoned": false 00:10:22.638 } 00:10:22.638 ] 00:10:22.638 12:54:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74222 00:10:22.638 12:54:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:22.638 12:54:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:22.897 Running I/O for 10 seconds... 00:10:23.830 Latency(us) 00:10:23.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.830 Nvme0n1 : 1.00 7186.00 28.07 0.00 0.00 0.00 0.00 0.00 00:10:23.830 =================================================================================================================== 00:10:23.830 Total : 7186.00 28.07 0.00 0.00 0.00 0.00 0.00 00:10:23.830 00:10:24.763 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:24.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.763 Nvme0n1 : 2.00 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:10:24.763 =================================================================================================================== 00:10:24.763 Total : 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:10:24.763 00:10:25.021 true 00:10:25.021 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:25.021 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:25.278 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:25.278 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:25.278 12:54:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74222 00:10:25.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.844 Nvme0n1 : 3.00 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:10:25.844 =================================================================================================================== 00:10:25.844 Total : 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:10:25.844 00:10:26.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.777 Nvme0n1 : 4.00 7281.50 28.44 0.00 0.00 0.00 0.00 0.00 00:10:26.777 =================================================================================================================== 00:10:26.777 Total : 7281.50 28.44 0.00 0.00 0.00 0.00 0.00 00:10:26.777 00:10:27.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.721 Nvme0n1 : 5.00 7376.40 28.81 0.00 0.00 0.00 0.00 0.00 00:10:27.722 =================================================================================================================== 00:10:27.722 Total : 7376.40 28.81 0.00 0.00 0.00 0.00 0.00 00:10:27.722 00:10:28.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.657 
Nvme0n1 : 6.00 7309.33 28.55 0.00 0.00 0.00 0.00 0.00 00:10:28.657 =================================================================================================================== 00:10:28.657 Total : 7309.33 28.55 0.00 0.00 0.00 0.00 0.00 00:10:28.657 00:10:30.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.030 Nvme0n1 : 7.00 7198.71 28.12 0.00 0.00 0.00 0.00 0.00 00:10:30.030 =================================================================================================================== 00:10:30.030 Total : 7198.71 28.12 0.00 0.00 0.00 0.00 0.00 00:10:30.030 00:10:30.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.965 Nvme0n1 : 8.00 6346.38 24.79 0.00 0.00 0.00 0.00 0.00 00:10:30.965 =================================================================================================================== 00:10:30.965 Total : 6346.38 24.79 0.00 0.00 0.00 0.00 0.00 00:10:30.965 00:10:31.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.899 Nvme0n1 : 9.00 5909.89 23.09 0.00 0.00 0.00 0.00 0.00 00:10:31.899 =================================================================================================================== 00:10:31.899 Total : 5909.89 23.09 0.00 0.00 0.00 0.00 0.00 00:10:31.899 00:10:32.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.833 Nvme0n1 : 10.00 6071.50 23.72 0.00 0.00 0.00 0.00 0.00 00:10:32.833 =================================================================================================================== 00:10:32.833 Total : 6071.50 23.72 0.00 0.00 0.00 0.00 0.00 00:10:32.833 00:10:32.833 00:10:32.833 Latency(us) 00:10:32.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.833 Nvme0n1 : 10.00 6082.08 23.76 0.00 0.00 21038.85 6374.87 1631965.56 00:10:32.833 =================================================================================================================== 00:10:32.833 Total : 6082.08 23.76 0.00 0.00 21038.85 6374.87 1631965.56 00:10:32.833 0 00:10:32.833 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74169 00:10:32.833 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74169 ']' 00:10:32.833 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74169 00:10:32.833 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:32.833 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74169 00:10:32.834 killing process with pid 74169 00:10:32.834 Received shutdown signal, test time was about 10.000000 seconds 00:10:32.834 00:10:32.834 Latency(us) 00:10:32.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.834 =================================================================================================================== 00:10:32.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74169' 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74169 00:10:32.834 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74169 00:10:33.091 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.348 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:33.348 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:33.605 12:54:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73564 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73564 00:10:33.863 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73564 Killed "${NVMF_APP[@]}" "$@" 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@485 -- # nvmfpid=74385 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@486 -- # waitforlisten 74385 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74385 ']' 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
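A minimal sketch of the grow-and-recover sequence this dirty-lvstore test exercises, assembled only from rpc.py calls that appear verbatim in the trace (the aio file path and lvstore UUID are the run-specific values printed above, not fixed names):

  # enlarge the backing file, let the AIO bdev pick up the new size, then grow the lvstore online
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6

  # after the target is killed with -9 (dirty shutdown), the restarted target only re-creates
  # the AIO bdev; the lvstore is recovered from its on-disk metadata at that point
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6

The recovery step is visible just below as the 'Performing recovery on blobstore' notices emitted while bdev_aio_create runs on the restarted target.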
00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.863 12:54:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.863 [2024-07-15 12:54:46.200598] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:33.863 [2024-07-15 12:54:46.200703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.120 [2024-07-15 12:54:46.343034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.120 [2024-07-15 12:54:46.401428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.120 [2024-07-15 12:54:46.401478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.120 [2024-07-15 12:54:46.401490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.120 [2024-07-15 12:54:46.401498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.120 [2024-07-15 12:54:46.401505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.120 [2024-07-15 12:54:46.401535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.050 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:35.050 [2024-07-15 12:54:47.492022] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:35.050 [2024-07-15 12:54:47.492265] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:35.050 [2024-07-15 12:54:47.492482] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:35.307 12:54:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:35.307 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:35.564 12:54:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 -t 2000 00:10:35.821 [ 00:10:35.821 { 00:10:35.821 "aliases": [ 00:10:35.821 "lvs/lvol" 00:10:35.821 ], 00:10:35.821 "assigned_rate_limits": { 00:10:35.821 "r_mbytes_per_sec": 0, 00:10:35.821 "rw_ios_per_sec": 0, 00:10:35.821 "rw_mbytes_per_sec": 0, 00:10:35.821 "w_mbytes_per_sec": 0 00:10:35.821 }, 00:10:35.821 "block_size": 4096, 00:10:35.821 "claimed": false, 00:10:35.821 "driver_specific": { 00:10:35.821 "lvol": { 00:10:35.821 "base_bdev": "aio_bdev", 00:10:35.821 "clone": false, 00:10:35.821 "esnap_clone": false, 00:10:35.821 "lvol_store_uuid": "428f0577-14e2-48ca-86c7-96ce3c2f34b6", 00:10:35.821 "num_allocated_clusters": 38, 00:10:35.821 "snapshot": false, 00:10:35.821 "thin_provision": false 00:10:35.821 } 00:10:35.821 }, 00:10:35.821 "name": "2afc56ad-b826-4ef3-b577-3bc6fa21bf37", 00:10:35.821 "num_blocks": 38912, 00:10:35.821 "product_name": "Logical Volume", 00:10:35.821 "supported_io_types": { 00:10:35.821 "abort": false, 00:10:35.821 "compare": false, 00:10:35.821 "compare_and_write": false, 00:10:35.821 "copy": false, 00:10:35.821 "flush": false, 00:10:35.821 "get_zone_info": false, 00:10:35.821 "nvme_admin": false, 00:10:35.821 "nvme_io": false, 00:10:35.821 "nvme_io_md": false, 00:10:35.821 "nvme_iov_md": false, 00:10:35.821 "read": true, 00:10:35.821 "reset": true, 00:10:35.821 "seek_data": true, 00:10:35.821 "seek_hole": true, 00:10:35.821 "unmap": true, 00:10:35.821 "write": true, 00:10:35.821 "write_zeroes": true, 00:10:35.821 "zcopy": false, 00:10:35.821 "zone_append": false, 00:10:35.821 "zone_management": false 00:10:35.821 }, 00:10:35.821 "uuid": "2afc56ad-b826-4ef3-b577-3bc6fa21bf37", 00:10:35.821 "zoned": false 00:10:35.821 } 00:10:35.821 ] 00:10:35.821 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:35.821 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:35.821 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:36.077 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:36.077 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:36.077 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:36.333 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:36.333 12:54:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:36.591 [2024-07-15 12:54:49.045709] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:36.848 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:37.106 2024/07/15 12:54:49 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:428f0577-14e2-48ca-86c7-96ce3c2f34b6], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:37.106 request: 00:10:37.106 { 00:10:37.106 "method": "bdev_lvol_get_lvstores", 00:10:37.106 "params": { 00:10:37.106 "uuid": "428f0577-14e2-48ca-86c7-96ce3c2f34b6" 00:10:37.106 } 00:10:37.106 } 00:10:37.106 Got JSON-RPC error response 00:10:37.106 GoRPCClient: error on JSON-RPC call 00:10:37.106 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:37.106 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:37.106 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:37.106 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:37.106 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:37.364 aio_bdev 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:37.364 12:54:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:37.364 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:37.622 12:54:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 -t 2000 00:10:37.879 [ 00:10:37.879 { 00:10:37.879 "aliases": [ 00:10:37.879 "lvs/lvol" 00:10:37.879 ], 00:10:37.879 "assigned_rate_limits": { 00:10:37.879 "r_mbytes_per_sec": 0, 00:10:37.879 "rw_ios_per_sec": 0, 00:10:37.879 "rw_mbytes_per_sec": 0, 00:10:37.879 "w_mbytes_per_sec": 0 00:10:37.879 }, 00:10:37.879 "block_size": 4096, 00:10:37.879 "claimed": false, 00:10:37.879 "driver_specific": { 00:10:37.879 "lvol": { 00:10:37.879 "base_bdev": "aio_bdev", 00:10:37.879 "clone": false, 00:10:37.879 "esnap_clone": false, 00:10:37.879 "lvol_store_uuid": "428f0577-14e2-48ca-86c7-96ce3c2f34b6", 00:10:37.879 "num_allocated_clusters": 38, 00:10:37.879 "snapshot": false, 00:10:37.879 "thin_provision": false 00:10:37.879 } 00:10:37.879 }, 00:10:37.879 "name": "2afc56ad-b826-4ef3-b577-3bc6fa21bf37", 00:10:37.879 "num_blocks": 38912, 00:10:37.879 "product_name": "Logical Volume", 00:10:37.879 "supported_io_types": { 00:10:37.879 "abort": false, 00:10:37.879 "compare": false, 00:10:37.879 "compare_and_write": false, 00:10:37.879 "copy": false, 00:10:37.879 "flush": false, 00:10:37.879 "get_zone_info": false, 00:10:37.879 "nvme_admin": false, 00:10:37.879 "nvme_io": false, 00:10:37.879 "nvme_io_md": false, 00:10:37.879 "nvme_iov_md": false, 00:10:37.879 "read": true, 00:10:37.879 "reset": true, 00:10:37.879 "seek_data": true, 00:10:37.879 "seek_hole": true, 00:10:37.879 "unmap": true, 00:10:37.879 "write": true, 00:10:37.879 "write_zeroes": true, 00:10:37.879 "zcopy": false, 00:10:37.879 "zone_append": false, 00:10:37.879 "zone_management": false 00:10:37.879 }, 00:10:37.879 "uuid": "2afc56ad-b826-4ef3-b577-3bc6fa21bf37", 00:10:37.879 "zoned": false 00:10:37.879 } 00:10:37.879 ] 00:10:37.879 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:37.879 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:37.879 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:38.148 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:38.148 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:38.148 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:38.427 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:38.427 12:54:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2afc56ad-b826-4ef3-b577-3bc6fa21bf37 00:10:38.685 12:54:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 428f0577-14e2-48ca-86c7-96ce3c2f34b6 00:10:38.943 12:54:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:39.202 12:54:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:39.769 00:10:39.769 real 0m21.462s 00:10:39.769 user 0m43.393s 00:10:39.769 sys 0m7.415s 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.769 ************************************ 00:10:39.769 END TEST lvs_grow_dirty 00:10:39.769 ************************************ 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:39.769 nvmf_trace.0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:39.769 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.028 rmmod nvme_tcp 00:10:40.028 rmmod nvme_fabrics 00:10:40.028 rmmod nvme_keyring 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@493 -- # '[' -n 74385 ']' 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@494 -- # killprocess 74385 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74385 ']' 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74385 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:40.028 12:54:52 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74385 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.028 killing process with pid 74385 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74385' 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74385 00:10:40.028 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74385 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:10:40.287 00:10:40.287 real 0m42.245s 00:10:40.287 user 1m8.821s 00:10:40.287 sys 0m10.148s 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.287 12:54:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:40.287 ************************************ 00:10:40.287 END TEST nvmf_lvs_grow 00:10:40.287 ************************************ 00:10:40.287 12:54:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.287 12:54:52 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:40.287 12:54:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.287 12:54:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.287 12:54:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.287 ************************************ 00:10:40.287 START TEST nvmf_bdev_io_wait 00:10:40.287 ************************************ 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:40.287 * Looking for test storage... 
00:10:40.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.287 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.288 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@414 -- # local -g is_hw=no 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # nvmf_veth_init 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.288 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:10:40.546 Cannot find device "nvmf_tgt_br" 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.546 Cannot find device "nvmf_tgt_br2" 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # true 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:10:40.546 Cannot find device "nvmf_tgt_br" 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:10:40.546 Cannot find device "nvmf_tgt_br2" 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:40.546 12:54:52 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.546 12:54:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.546 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:10:40.546 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # 
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:10:40.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:40.805 00:10:40.805 --- 10.0.0.2 ping statistics --- 00:10:40.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.805 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:10:40.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:40.805 00:10:40.805 --- 10.0.0.3 ping statistics --- 00:10:40.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.805 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:40.805 00:10:40.805 --- 10.0.0.1 ping statistics --- 00:10:40.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.805 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@437 -- # return 0 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.805 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # nvmfpid=74804 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # waitforlisten 74804 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74804 ']' 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
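The topology that the three pings above verify is the one nvmf_veth_init builds earlier in this trace. Condensed into a standalone sketch (interface names, addresses and the port-4420 iptables rule exactly as traced; the second target interface and the intermediate link-up steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br      # host-side peer of the initiator interface joins the bridge
  ip link set nvmf_tgt_br master nvmf_br       # host-side peer of the namespaced target interface joins it too
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # host -> target namespace, as logged above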
00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.806 12:54:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:40.806 [2024-07-15 12:54:53.212423] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:40.806 [2024-07-15 12:54:53.212535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.063 [2024-07-15 12:54:53.351593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.064 [2024-07-15 12:54:53.422270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.064 [2024-07-15 12:54:53.422346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.064 [2024-07-15 12:54:53.422360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.064 [2024-07-15 12:54:53.422370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.064 [2024-07-15 12:54:53.422378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.064 [2024-07-15 12:54:53.422739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.064 [2024-07-15 12:54:53.422877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.064 [2024-07-15 12:54:53.423295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.064 [2024-07-15 12:54:53.423346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 [2024-07-15 12:54:54.315004] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 Malloc0 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:41.998 [2024-07-15 12:54:54.372165] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74857 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74859 00:10:41.998 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:10:41.998 { 00:10:41.998 "params": { 00:10:41.998 "name": "Nvme$subsystem", 00:10:41.998 "trtype": "$TEST_TRANSPORT", 00:10:41.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.998 "adrfam": "ipv4", 00:10:41.998 "trsvcid": "$NVMF_PORT", 00:10:41.998 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.998 "hdgst": ${hdgst:-false}, 00:10:41.999 "ddgst": ${ddgst:-false} 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 } 00:10:41.999 EOF 00:10:41.999 )") 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74861 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:10:41.999 { 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme$subsystem", 00:10:41.999 "trtype": "$TEST_TRANSPORT", 00:10:41.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "$NVMF_PORT", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.999 "hdgst": ${hdgst:-false}, 00:10:41.999 "ddgst": ${ddgst:-false} 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 } 00:10:41.999 EOF 00:10:41.999 )") 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74864 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 
00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:10:41.999 { 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme$subsystem", 00:10:41.999 "trtype": "$TEST_TRANSPORT", 00:10:41.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "$NVMF_PORT", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.999 "hdgst": ${hdgst:-false}, 00:10:41.999 "ddgst": ${ddgst:-false} 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 } 00:10:41.999 EOF 00:10:41.999 )") 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme1", 00:10:41.999 "trtype": "tcp", 00:10:41.999 "traddr": "10.0.0.2", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "4420", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.999 "hdgst": false, 00:10:41.999 "ddgst": false 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 }' 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme1", 00:10:41.999 "trtype": "tcp", 00:10:41.999 "traddr": "10.0.0.2", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "4420", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.999 "hdgst": false, 00:10:41.999 "ddgst": false 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 }' 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 
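All four bdevperf invocations that bdev_io_wait.sh drives against the same subsystem have now appeared in the trace. Stripped of the xtrace noise they follow one pattern, sketched below; the script feeds each instance the JSON generated above through /dev/fd/63, and process substitution is shown here as the likely mechanism:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # per job: core mask, instance id and workload differ; queue depth 128, 4 KiB I/Os, 1 s runtime, 256 MB of memory are shared
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID    # pids 74857, 74859, 74861, 74864 in this run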
00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme1", 00:10:41.999 "trtype": "tcp", 00:10:41.999 "traddr": "10.0.0.2", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "4420", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.999 "hdgst": false, 00:10:41.999 "ddgst": false 00:10:41.999 }, 00:10:41.999 "method": "bdev_nvme_attach_controller" 00:10:41.999 }' 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:10:41.999 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:10:41.999 { 00:10:41.999 "params": { 00:10:41.999 "name": "Nvme$subsystem", 00:10:41.999 "trtype": "$TEST_TRANSPORT", 00:10:41.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.999 "adrfam": "ipv4", 00:10:41.999 "trsvcid": "$NVMF_PORT", 00:10:41.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.999 "hdgst": ${hdgst:-false}, 00:10:41.999 "ddgst": ${ddgst:-false} 00:10:41.999 }, 00:10:42.000 "method": "bdev_nvme_attach_controller" 00:10:42.000 } 00:10:42.000 EOF 00:10:42.000 )") 00:10:42.000 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:10:42.000 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:10:42.000 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:10:42.000 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:10:42.000 "params": { 00:10:42.000 "name": "Nvme1", 00:10:42.000 "trtype": "tcp", 00:10:42.000 "traddr": "10.0.0.2", 00:10:42.000 "adrfam": "ipv4", 00:10:42.000 "trsvcid": "4420", 00:10:42.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.000 "hdgst": false, 00:10:42.000 "ddgst": false 00:10:42.000 }, 00:10:42.000 "method": "bdev_nvme_attach_controller" 00:10:42.000 }' 00:10:42.000 12:54:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74857 00:10:42.000 [2024-07-15 12:54:54.436023] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:42.000 [2024-07-15 12:54:54.436096] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:42.000 [2024-07-15 12:54:54.442248] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:42.000 [2024-07-15 12:54:54.442478] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:42.000 [2024-07-15 12:54:54.454891] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:10:42.000 [2024-07-15 12:54:54.454965] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:42.000 [2024-07-15 12:54:54.458416] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:42.000 [2024-07-15 12:54:54.458797] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:42.258 [2024-07-15 12:54:54.624630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.258 [2024-07-15 12:54:54.666132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.258 [2024-07-15 12:54:54.680526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.258 [2024-07-15 12:54:54.714034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.258 [2024-07-15 12:54:54.721031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:42.516 [2024-07-15 12:54:54.755166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.516 [2024-07-15 12:54:54.770420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:42.516 Running I/O for 1 seconds... 00:10:42.516 [2024-07-15 12:54:54.820751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:42.516 Running I/O for 1 seconds... 00:10:42.516 Running I/O for 1 seconds... 00:10:42.516 Running I/O for 1 seconds... 00:10:43.450 00:10:43.450 Latency(us) 00:10:43.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.450 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:43.450 Nvme1n1 : 1.02 5918.94 23.12 0.00 0.00 21528.85 7030.23 38606.66 00:10:43.450 =================================================================================================================== 00:10:43.450 Total : 5918.94 23.12 0.00 0.00 21528.85 7030.23 38606.66 00:10:43.450 00:10:43.450 Latency(us) 00:10:43.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.450 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:43.450 Nvme1n1 : 1.00 171348.37 669.33 0.00 0.00 744.04 283.00 1474.56 00:10:43.450 =================================================================================================================== 00:10:43.450 Total : 171348.37 669.33 0.00 0.00 744.04 283.00 1474.56 00:10:43.450 00:10:43.450 Latency(us) 00:10:43.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.450 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:43.450 Nvme1n1 : 1.01 9263.12 36.18 0.00 0.00 13759.95 6940.86 23712.12 00:10:43.450 =================================================================================================================== 00:10:43.450 Total : 9263.12 36.18 0.00 0.00 13759.95 6940.86 23712.12 00:10:43.707 00:10:43.707 Latency(us) 00:10:43.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.707 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:43.707 Nvme1n1 : 1.01 5690.58 22.23 0.00 0.00 22401.29 7238.75 46232.67 00:10:43.707 
=================================================================================================================== 00:10:43.707 Total : 5690.58 22.23 0.00 0.00 22401.29 7238.75 46232.67 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74859 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74861 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74864 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:43.707 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.965 rmmod nvme_tcp 00:10:43.965 rmmod nvme_fabrics 00:10:43.965 rmmod nvme_keyring 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # '[' -n 74804 ']' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # killprocess 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74804 ']' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:43.965 killing process with pid 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74804' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74804 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.965 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.225 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:10:44.225 00:10:44.225 real 0m3.818s 00:10:44.225 user 0m16.694s 00:10:44.225 sys 0m1.724s 00:10:44.225 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.225 12:54:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:44.225 ************************************ 00:10:44.225 END TEST nvmf_bdev_io_wait 00:10:44.225 ************************************ 00:10:44.225 12:54:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.225 12:54:56 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.225 12:54:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.225 12:54:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.225 12:54:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.225 ************************************ 00:10:44.225 START TEST nvmf_queue_depth 00:10:44.225 ************************************ 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:44.225 * Looking for test storage... 
00:10:44.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@452 -- # prepare_net_devs 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@436 -- # nvmf_veth_init 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.225 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:10:44.226 Cannot find device "nvmf_tgt_br" 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.226 Cannot find device "nvmf_tgt_br2" 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # true 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:10:44.226 Cannot find device "nvmf_tgt_br" 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:10:44.226 Cannot find device "nvmf_tgt_br2" 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@163 -- # true 00:10:44.226 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.484 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:10:44.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:44.485 00:10:44.485 --- 10.0.0.2 ping statistics --- 00:10:44.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.485 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:10:44.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:44.485 00:10:44.485 --- 10.0.0.3 ping statistics --- 00:10:44.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.485 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:44.485 00:10:44.485 --- 10.0.0.1 ping statistics --- 00:10:44.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.485 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@437 -- # return 0 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@485 -- # nvmfpid=75088 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@486 -- # waitforlisten 75088 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75088 ']' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
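With connectivity re-verified, the queue_depth test starts its own target inside the namespace, this time on core mask 0x2, and then drives it over the default RPC socket. A minimal standalone sketch of that launch and the wait-for-socket step (waitforlisten does the real polling in the script; the loop below is a crude stand-in):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                                   # 75088 in this run
  until [ -S /var/tmp/spdk.sock ]; do          # the UNIX socket is filesystem-based, so it is reachable from the host namespace
      sleep 0.1
  done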
00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.485 12:54:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:44.743 [2024-07-15 12:54:57.004918] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:10:44.743 [2024-07-15 12:54:57.005011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.743 [2024-07-15 12:54:57.137934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.743 [2024-07-15 12:54:57.197720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.743 [2024-07-15 12:54:57.197791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.743 [2024-07-15 12:54:57.197805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.743 [2024-07-15 12:54:57.197813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.743 [2024-07-15 12:54:57.197821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.743 [2024-07-15 12:54:57.197853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 [2024-07-15 12:54:58.082733] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 Malloc0 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.677 [2024-07-15 12:54:58.137205] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75144 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:45.677 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75144 /var/tmp/bdevperf.sock 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75144 ']' 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:45.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.936 12:54:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:45.936 [2024-07-15 12:54:58.199204] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
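The rpc_cmd calls traced above give the target everything bdevperf needs: a TCP transport, a 64 MB malloc bdev, a subsystem, a namespace and a listener on 10.0.0.2 port 4420. Issued by hand against the same socket they would look roughly like this (a sketch; rpc.py and its default socket /var/tmp/spdk.sock are assumed, the test goes through its rpc_cmd wrapper instead):

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py           # talks to /var/tmp/spdk.sock by default
  "$RPC_PY" nvmf_create_transport -t tcp -o -u 8192
  "$RPC_PY" bdev_malloc_create 64 512 -b Malloc0                # MALLOC_BDEV_SIZE=64 MB, MALLOC_BLOCK_SIZE=512
  "$RPC_PY" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC_PY" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC_PY" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420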
00:10:45.936 [2024-07-15 12:54:58.199303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75144 ] 00:10:45.936 [2024-07-15 12:54:58.340327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.194 [2024-07-15 12:54:58.411325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.760 12:54:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.760 12:54:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:46.760 12:54:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:46.760 12:54:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.760 12:54:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.018 NVMe0n1 00:10:47.018 12:54:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.018 12:54:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:47.018 Running I/O for 10 seconds... 00:10:57.081 00:10:57.082 Latency(us) 00:10:57.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.082 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:57.082 Verification LBA range: start 0x0 length 0x4000 00:10:57.082 NVMe0n1 : 10.10 8516.32 33.27 0.00 0.00 119742.63 27644.28 82456.20 00:10:57.082 =================================================================================================================== 00:10:57.082 Total : 8516.32 33.27 0.00 0.00 119742.63 27644.28 82456.20 00:10:57.082 0 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75144 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75144 ']' 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75144 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75144 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75144' 00:10:57.082 killing process with pid 75144 00:10:57.082 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75144 00:10:57.082 Received shutdown signal, test time was about 10.000000 seconds 00:10:57.082 00:10:57.082 Latency(us) 00:10:57.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.082 =================================================================================================================== 00:10:57.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:57.082 12:55:09 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75144 00:10:57.339 12:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:57.339 12:55:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:57.339 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:57.339 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.340 rmmod nvme_tcp 00:10:57.340 rmmod nvme_fabrics 00:10:57.340 rmmod nvme_keyring 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@493 -- # '[' -n 75088 ']' 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@494 -- # killprocess 75088 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75088 ']' 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75088 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.340 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75088 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:57.597 killing process with pid 75088 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75088' 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75088 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75088 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.597 12:55:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.597 12:55:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:10:57.597 00:10:57.597 real 0m13.520s 00:10:57.597 user 0m23.641s 00:10:57.597 sys 0m1.895s 00:10:57.597 12:55:10 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.597 12:55:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.597 ************************************ 00:10:57.597 END TEST nvmf_queue_depth 00:10:57.597 ************************************ 00:10:57.597 12:55:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:57.597 12:55:10 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:57.598 12:55:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.598 12:55:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.598 12:55:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.856 ************************************ 00:10:57.856 START TEST nvmf_target_multipath 00:10:57.856 ************************************ 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:57.856 * Looking for test storage... 00:10:57.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.856 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@436 -- # nvmf_veth_init 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:10:57.856 Cannot find device "nvmf_tgt_br" 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.856 Cannot find device "nvmf_tgt_br2" 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # true 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:10:57.856 Cannot find device "nvmf_tgt_br" 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:57.856 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:10:57.856 Cannot find device "nvmf_tgt_br2" 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.857 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:10:58.114 12:55:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:10:58.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:58.114 00:10:58.114 --- 10.0.0.2 ping statistics --- 00:10:58.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.114 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:58.114 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:10:58.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:58.115 00:10:58.115 --- 10.0.0.3 ping statistics --- 00:10:58.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.115 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:58.115 00:10:58.115 --- 10.0.0.1 ping statistics --- 00:10:58.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.115 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@437 -- # return 0 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@485 -- # nvmfpid=75472 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@486 -- # waitforlisten 75472 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75472 ']' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.115 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:58.115 [2024-07-15 12:55:10.579885] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
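The three addresses exercised by the ping checks above come from the veth/bridge/namespace topology that nvmf_veth_init builds a few lines earlier. Collected into plain shell, with the same interface, bridge, and namespace names as in this run, the setup is:

  # Host-side initiator interface (10.0.0.1) plus two target interfaces inside a namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and join the host-side peers with a bridge
  ip link add nvmf_br type bridge
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, nvmf_tgt is started inside nvmf_tgt_ns_spdk, so its listeners on 10.0.0.2 and 10.0.0.3 are reachable from the host-side 10.0.0.1.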
00:10:58.115 [2024-07-15 12:55:10.579992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.371 [2024-07-15 12:55:10.716554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.371 [2024-07-15 12:55:10.776008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.371 [2024-07-15 12:55:10.776063] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.371 [2024-07-15 12:55:10.776083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.371 [2024-07-15 12:55:10.776092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.371 [2024-07-15 12:55:10.776099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.371 [2024-07-15 12:55:10.776199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.371 [2024-07-15 12:55:10.776903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.371 [2024-07-15 12:55:10.776985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.371 [2024-07-15 12:55:10.776994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.629 12:55:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.909 [2024-07-15 12:55:11.148667] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.909 12:55:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:59.184 Malloc0 00:10:59.184 12:55:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:59.441 12:55:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.698 12:55:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.955 [2024-07-15 12:55:12.228438] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.955 12:55:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:11:00.212 [2024-07-15 12:55:12.592842] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:00.213 12:55:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:00.469 12:55:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:00.727 12:55:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.727 12:55:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.727 12:55:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.727 12:55:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:00.727 12:55:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:02.642 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:02.643 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:02.643 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:02.643 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75596 00:11:02.643 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:02.643 12:55:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:02.643 [global] 00:11:02.643 thread=1 00:11:02.643 invalidate=1 00:11:02.643 rw=randrw 00:11:02.643 time_based=1 00:11:02.643 runtime=6 00:11:02.643 ioengine=libaio 00:11:02.643 direct=1 00:11:02.643 bs=4096 00:11:02.643 iodepth=128 00:11:02.643 norandommap=0 00:11:02.643 numjobs=1 00:11:02.643 00:11:02.643 verify_dump=1 00:11:02.643 verify_backlog=512 00:11:02.643 verify_state_save=0 00:11:02.643 do_verify=1 00:11:02.643 verify=crc32c-intel 00:11:02.643 [job0] 00:11:02.643 filename=/dev/nvme0n1 00:11:02.643 Could not set queue depth (nvme0n1) 00:11:02.900 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.900 fio-3.35 00:11:02.900 Starting 1 thread 00:11:03.834 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:04.092 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:04.350 12:55:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:05.282 12:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:05.282 12:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:05.282 12:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:05.282 12:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:05.539 12:55:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:05.796 12:55:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:06.744 12:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:06.744 12:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:06.744 12:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:06.744 12:55:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75596 00:11:09.271 00:11:09.271 job0: (groupid=0, jobs=1): err= 0: pid=75617: Mon Jul 15 12:55:21 2024 00:11:09.271 read: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(253MiB/6006msec) 00:11:09.271 slat (usec): min=2, max=5150, avg=52.97, stdev=239.80 00:11:09.271 clat (usec): min=1047, max=15792, avg=8064.52, stdev=1289.00 00:11:09.271 lat (usec): min=1063, max=15804, avg=8117.49, stdev=1298.78 00:11:09.271 clat percentiles (usec): 00:11:09.271 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7242], 00:11:09.271 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:11:09.271 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10421], 00:11:09.271 | 99.00th=[11994], 99.50th=[12649], 99.90th=[14353], 99.95th=[14615], 00:11:09.271 | 99.99th=[15401] 00:11:09.271 bw ( KiB/s): min= 9680, max=29064, per=52.91%, avg=22826.91, stdev=5837.89, samples=11 00:11:09.271 iops : min= 2420, max= 7266, avg=5706.73, stdev=1459.47, samples=11 00:11:09.271 write: IOPS=6380, BW=24.9MiB/s (26.1MB/s)(134MiB/5375msec); 0 zone resets 00:11:09.271 slat (usec): min=4, max=2760, avg=64.37, stdev=162.44 00:11:09.271 clat (usec): min=729, max=14893, avg=6958.33, stdev=1084.85 00:11:09.271 lat (usec): min=762, max=14917, avg=7022.70, stdev=1089.10 00:11:09.271 clat percentiles (usec): 00:11:09.271 | 1.00th=[ 3851], 5.00th=[ 5080], 10.00th=[ 5866], 20.00th=[ 6325], 00:11:09.271 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:11:09.271 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 8029], 95.00th=[ 8586], 00:11:09.271 | 99.00th=[10159], 99.50th=[10945], 99.90th=[12649], 99.95th=[13042], 00:11:09.271 | 99.99th=[14615] 00:11:09.271 bw ( KiB/s): min= 9920, max=28288, per=89.61%, avg=22869.09, stdev=5491.26, samples=11 00:11:09.271 iops : min= 2480, max= 7072, avg=5717.27, stdev=1372.81, samples=11 00:11:09.271 lat (usec) : 750=0.01%, 1000=0.01% 00:11:09.271 lat (msec) : 2=0.02%, 4=0.60%, 10=94.50%, 20=4.87% 00:11:09.271 cpu : usr=6.24%, sys=22.21%, ctx=6368, majf=0, minf=121 00:11:09.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:09.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.271 issued rwts: total=64782,34293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.271 00:11:09.271 Run status group 0 (all jobs): 00:11:09.271 READ: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=253MiB (265MB), run=6006-6006msec 00:11:09.271 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=134MiB (140MB), run=5375-5375msec 00:11:09.271 00:11:09.271 Disk stats (read/write): 00:11:09.271 nvme0n1: ios=63841/33720, merge=0/0, 
ticks=480789/218451, in_queue=699240, util=98.51% 00:11:09.271 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:09.271 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:09.529 12:55:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75751 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:10.937 12:55:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:10.937 [global] 00:11:10.937 thread=1 00:11:10.937 invalidate=1 00:11:10.937 rw=randrw 00:11:10.937 time_based=1 00:11:10.937 runtime=6 00:11:10.937 ioengine=libaio 00:11:10.937 direct=1 00:11:10.937 bs=4096 00:11:10.937 iodepth=128 00:11:10.937 norandommap=0 00:11:10.937 numjobs=1 00:11:10.937 00:11:10.937 verify_dump=1 00:11:10.937 verify_backlog=512 00:11:10.937 verify_state_save=0 00:11:10.937 do_verify=1 00:11:10.937 verify=crc32c-intel 00:11:10.937 [job0] 00:11:10.937 filename=/dev/nvme0n1 00:11:10.937 Could not set queue depth (nvme0n1) 00:11:10.937 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.937 fio-3.35 00:11:10.937 Starting 1 thread 00:11:11.870 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:11.870 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:12.128 12:55:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:13.501 12:55:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:13.501 12:55:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:13.501 12:55:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:13.501 12:55:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:13.501 12:55:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:13.759 12:55:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:14.694 12:55:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:14.694 12:55:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.694 12:55:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:14.694 12:55:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75751 00:11:17.224 00:11:17.224 job0: (groupid=0, jobs=1): err= 0: pid=75772: Mon Jul 15 12:55:29 2024 00:11:17.224 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(279MiB/6007msec) 00:11:17.224 slat (usec): min=5, max=5522, avg=42.73, stdev=205.73 00:11:17.224 clat (usec): min=759, max=19238, avg=7414.34, stdev=1753.09 00:11:17.224 lat (usec): min=786, max=19250, avg=7457.07, stdev=1770.96 00:11:17.224 clat percentiles (usec): 00:11:17.224 | 1.00th=[ 3621], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5800], 00:11:17.224 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7767], 00:11:17.224 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10290], 00:11:17.224 | 99.00th=[11994], 99.50th=[12649], 99.90th=[14877], 99.95th=[17171], 00:11:17.224 | 99.99th=[18220] 00:11:17.224 bw ( KiB/s): min= 4408, max=44136, per=53.07%, avg=25272.67, stdev=9904.69, samples=12 00:11:17.224 iops : min= 1102, max=11034, avg=6318.17, stdev=2476.17, samples=12 00:11:17.224 write: IOPS=7195, BW=28.1MiB/s (29.5MB/s)(148MiB/5275msec); 0 zone resets 00:11:17.224 slat (usec): min=13, max=4105, avg=54.30, stdev=134.20 00:11:17.224 clat (usec): min=203, max=17642, avg=6114.97, stdev=1784.14 00:11:17.224 lat (usec): min=296, max=17770, avg=6169.27, stdev=1799.87 00:11:17.224 clat percentiles (usec): 00:11:17.224 | 1.00th=[ 2704], 5.00th=[ 3294], 10.00th=[ 3687], 20.00th=[ 4228], 00:11:17.224 | 30.00th=[ 4817], 40.00th=[ 5997], 50.00th=[ 6521], 60.00th=[ 6849], 00:11:17.224 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8717], 00:11:17.224 | 99.00th=[10290], 99.50th=[11207], 99.90th=[14222], 99.95th=[15533], 00:11:17.224 | 99.99th=[17433] 00:11:17.224 bw ( KiB/s): min= 4632, max=43360, per=87.78%, avg=25263.33, stdev=9692.19, samples=12 00:11:17.224 iops : min= 1158, max=10840, avg=6315.83, stdev=2423.05, samples=12 00:11:17.224 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:17.224 lat (msec) : 2=0.10%, 4=6.62%, 10=88.92%, 20=4.33% 00:11:17.224 cpu : usr=6.06%, sys=26.62%, ctx=7443, majf=0, minf=121 00:11:17.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:17.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.224 issued rwts: total=71519,37954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.224 00:11:17.224 Run status group 0 (all jobs): 00:11:17.224 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=279MiB (293MB), run=6007-6007msec 00:11:17.224 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=148MiB (155MB), run=5275-5275msec 00:11:17.224 00:11:17.224 Disk stats (read/write): 00:11:17.224 nvme0n1: ios=70598/37306, merge=0/0, ticks=484144/208056, in_queue=692200, util=98.63% 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.224 12:55:29 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:17.224 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.483 rmmod nvme_tcp 00:11:17.483 rmmod nvme_fabrics 00:11:17.483 rmmod nvme_keyring 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n 75472 ']' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@494 -- # killprocess 75472 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75472 ']' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75472 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75472 00:11:17.483 killing process with pid 75472 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75472' 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75472 00:11:17.483 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75472 00:11:17.741 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:17.741 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:17.741 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:17.741 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.741 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:11:17.742 00:11:17.742 real 0m19.927s 00:11:17.742 user 1m18.574s 00:11:17.742 sys 0m6.424s 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.742 12:55:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:17.742 ************************************ 00:11:17.742 END TEST nvmf_target_multipath 00:11:17.742 ************************************ 00:11:17.742 12:55:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:17.742 12:55:30 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:17.742 12:55:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.742 12:55:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.742 12:55:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.742 ************************************ 00:11:17.742 START TEST nvmf_zcopy 00:11:17.742 ************************************ 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:17.742 * Looking for test storage... 
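The multipath test that just finished above hinges on one mechanism: two listeners for the same ANA-reporting subsystem whose per-listener state is flipped at runtime while fio keeps running. Condensed from the rpc.py and nvme invocations recorded in that run (same NQN, addresses, and host identity; this is a sketch, not the full script):

  # Target: ANA-reporting subsystem (-r) with one namespace and two TCP listeners
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator: one connect per path (flags exactly as used in the trace)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
      --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
      --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

  # Fail path 1 over to path 2: 10.0.0.2 becomes inaccessible, 10.0.0.3 non-optimized
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

  # The kernel's per-path view follows within a second or two
  cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

Both fio passes above (fio_pid 75596 and 75751) finish with device utilization above 98% even though each path is made inaccessible at least once mid-run.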
00:11:17.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.742 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy 
-- nvmf/common.sh@436 -- # nvmf_veth_init 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:11:17.742 Cannot find device "nvmf_tgt_br" 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.742 Cannot find device "nvmf_tgt_br2" 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # true 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:11:17.742 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:11:18.001 Cannot find device "nvmf_tgt_br" 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:11:18.001 Cannot find device "nvmf_tgt_br2" 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.001 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.259 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:11:18.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:18.260 00:11:18.260 --- 10.0.0.2 ping statistics --- 00:11:18.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.260 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:11:18.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:18.260 00:11:18.260 --- 10.0.0.3 ping statistics --- 00:11:18.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.260 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:18.260 00:11:18.260 --- 10.0.0.1 ping statistics --- 00:11:18.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.260 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@437 -- # return 0 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@485 -- # nvmfpid=76053 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@486 -- # waitforlisten 76053 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76053 ']' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.260 12:55:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.260 [2024-07-15 12:55:30.589545] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:11:18.260 [2024-07-15 12:55:30.589655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.518 [2024-07-15 12:55:30.732082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.518 [2024-07-15 12:55:30.804458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.518 [2024-07-15 12:55:30.804516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:18.518 [2024-07-15 12:55:30.804531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.518 [2024-07-15 12:55:30.804541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.518 [2024-07-15 12:55:30.804550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.518 [2024-07-15 12:55:30.804583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 [2024-07-15 12:55:31.647373] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 [2024-07-15 12:55:31.663497] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 malloc0 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 
12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:11:19.458 { 00:11:19.458 "params": { 00:11:19.458 "name": "Nvme$subsystem", 00:11:19.458 "trtype": "$TEST_TRANSPORT", 00:11:19.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.458 "adrfam": "ipv4", 00:11:19.458 "trsvcid": "$NVMF_PORT", 00:11:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.458 "hdgst": ${hdgst:-false}, 00:11:19.458 "ddgst": ${ddgst:-false} 00:11:19.458 }, 00:11:19.458 "method": "bdev_nvme_attach_controller" 00:11:19.458 } 00:11:19.458 EOF 00:11:19.458 )") 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:11:19.458 12:55:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:11:19.458 "params": { 00:11:19.458 "name": "Nvme1", 00:11:19.458 "trtype": "tcp", 00:11:19.458 "traddr": "10.0.0.2", 00:11:19.458 "adrfam": "ipv4", 00:11:19.458 "trsvcid": "4420", 00:11:19.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.458 "hdgst": false, 00:11:19.458 "ddgst": false 00:11:19.458 }, 00:11:19.458 "method": "bdev_nvme_attach_controller" 00:11:19.458 }' 00:11:19.458 [2024-07-15 12:55:31.744963] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:11:19.458 [2024-07-15 12:55:31.745044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76104 ] 00:11:19.458 [2024-07-15 12:55:31.879035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.717 [2024-07-15 12:55:31.936774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.717 Running I/O for 10 seconds... 
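Before the 10-second run starts, the zcopy test wires the target up entirely over JSON-RPC and then points bdevperf at it, generating the initiator-side bdev configuration on the fly with gen_nvmf_target_json and handing it over through a process-substitution fd (the --json /dev/fd/62 above). A condensed, hedged sketch of the same sequence; the NQN, serial and addresses are copied from the log, while it is assumed that rpc.py is on PATH and talking to the already-running nvmf_tgt, and that nvmf/common.sh has been sourced so gen_nvmf_target_json is available:

    # Transport with zero-copy enabled; flags mirrored from zcopy.sh@22 above.
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem: allow any host (-a), fixed serial, at most 10 namespaces.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # TCP listener on the target-side veth address inside nvmf_tgt_ns_spdk.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # bdevperf connects back as an NVMe-oF initiator; the generated JSON is the
    # one printed above and amounts to a single bdev_nvme_attach_controller
    # call against 10.0.0.2:4420 / cnode1.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The discovery listener added at zcopy.sh@27 is left out of the sketch; it should not be needed for a direct attach like this one, since the generated JSON names cnode1 explicitly.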
00:11:29.733 00:11:29.733 Latency(us) 00:11:29.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:29.733 Verification LBA range: start 0x0 length 0x1000 00:11:29.733 Nvme1n1 : 10.01 5933.28 46.35 0.00 0.00 21502.33 1005.38 31218.97 00:11:29.733 =================================================================================================================== 00:11:29.733 Total : 5933.28 46.35 0.00 0.00 21502.33 1005.38 31218.97 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76221 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:11:29.991 { 00:11:29.991 "params": { 00:11:29.991 "name": "Nvme$subsystem", 00:11:29.991 "trtype": "$TEST_TRANSPORT", 00:11:29.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.991 "adrfam": "ipv4", 00:11:29.991 "trsvcid": "$NVMF_PORT", 00:11:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.991 "hdgst": ${hdgst:-false}, 00:11:29.991 "ddgst": ${ddgst:-false} 00:11:29.991 }, 00:11:29.991 "method": "bdev_nvme_attach_controller" 00:11:29.991 } 00:11:29.991 EOF 00:11:29.991 )") 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 
00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:11:29.991 12:55:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:11:29.991 "params": { 00:11:29.991 "name": "Nvme1", 00:11:29.991 "trtype": "tcp", 00:11:29.991 "traddr": "10.0.0.2", 00:11:29.991 "adrfam": "ipv4", 00:11:29.991 "trsvcid": "4420", 00:11:29.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.991 "hdgst": false, 00:11:29.991 "ddgst": false 00:11:29.991 }, 00:11:29.991 "method": "bdev_nvme_attach_controller" 00:11:29.991 }' 00:11:29.991 [2024-07-15 12:55:42.259864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.259911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.271864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.271902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.283864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.283903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.293378] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:11:29.991 [2024-07-15 12:55:42.293458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76221 ] 00:11:29.991 [2024-07-15 12:55:42.295878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.295913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.307873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.307912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.319885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.319927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.331867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.331899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.991 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.991 [2024-07-15 12:55:42.343885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.991 [2024-07-15 12:55:42.343925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.355893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.355930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:29.992 [2024-07-15 12:55:42.367922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.367966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.379899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.379937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.387877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.387910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.399892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.399926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.411888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.411922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.423889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.423931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 [2024-07-15 12:55:42.428007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.435920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.435961] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.992 [2024-07-15 12:55:42.447916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.992 [2024-07-15 12:55:42.447958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.992 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.459910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.459939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.471951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.471997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.483915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.483951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.495928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.495965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 [2024-07-15 12:55:42.498273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.503899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.503927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.511917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.511950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.519928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.519962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.531951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.531992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.543944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.543980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.555962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.556003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.563940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.563979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.571927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.571957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.579936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.579973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.587962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.587997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.595946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.595978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.603955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.603987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.611957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.250 [2024-07-15 12:55:42.611988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.250 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.250 [2024-07-15 12:55:42.619967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.619999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.627962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.627993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.635960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.635989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.643973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.644008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 Running I/O for 5 seconds... 
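The second bdevperf invocation above switches to a 50/50 random read/write workload (-w randrw -M 50) for 5 seconds, and both while it starts up and during the run the log fills with identical JSON-RPC failures: every nvmf_subsystem_add_ns call for nsid 1 is rejected with "Requested NSID 1 already in use" (Code=-32602), which follows from namespace 1 having been attached during setup; the run itself continues regardless. A small, hedged illustration of what each of those repeated entries corresponds to, assuming the target from the earlier sketch is still up:

    # Re-adding the same namespace id must fail; the log shows the test simply
    # records the negative response and keeps going. Illustrative only.
    if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "nsid 1 already in use on cnode1, as reported above"
    fi

As a consistency check on the earlier 10-second verify run: 5933.28 IOPS at 8192-byte I/O is about 46.3 MiB/s, matching the MiB/s column, and with a queue depth of 128 the reported average latency of roughly 21502 us agrees with 128 / 5933.28 ≈ 21.6 ms.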
00:11:30.251 [2024-07-15 12:55:42.651969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.651995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.663657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.663695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.673289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.673326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.685284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.685323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.696338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.696376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.251 [2024-07-15 12:55:42.707393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.251 [2024-07-15 12:55:42.707431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.251 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.718406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.718443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.730823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.730863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.740623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.740661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.752047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.752086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.763079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.763117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.774165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.774202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.786653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.786692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.797041] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.797078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.807683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.807722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.818380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.818417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.828994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.829032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.843319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.843358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.853689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.853727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.864582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.864619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.877148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.877185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.509 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.509 [2024-07-15 12:55:42.887681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.509 [2024-07-15 12:55:42.887720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.898570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.898609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.911205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.911246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.920884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.920920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.932476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.932517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.943754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.943805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.958142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.958183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.510 [2024-07-15 12:55:42.968456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.510 [2024-07-15 12:55:42.968496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.510 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:42.979423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:42.979459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:42.992409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:42.992448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.002628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.002666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.013318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.013357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.024151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.024188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.036809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.036846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.046967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.047004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.057583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.057620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.073252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.073289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.083648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.083684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.094268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:30.768 [2024-07-15 12:55:43.094307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.104852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.104887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.115814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.768 [2024-07-15 12:55:43.115848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.768 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.768 [2024-07-15 12:55:43.126990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.127028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.138435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.138472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.151095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.151132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.161427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.161463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.172357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.172393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.188191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.188230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.197658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.197695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.209165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.209203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.219843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.219880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.769 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:30.769 [2024-07-15 12:55:43.235243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.769 [2024-07-15 12:55:43.235281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.027 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.027 [2024-07-15 12:55:43.250404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.027 [2024-07-15 12:55:43.250440] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.027 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.260527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.260565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.271340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.271376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.282374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.282411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.293173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.293211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.309695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.309735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.319523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.319561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.330664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.330701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.341313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.341350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.352284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.352323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.363079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.363118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.376009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.376046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.386648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.386684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.397308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.397347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.408286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.408328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.423062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.423102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.440715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.440754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.451281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.451315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.462161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.462197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.028 [2024-07-15 12:55:43.472874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.472912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:31.028 [2024-07-15 12:55:43.487522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.028 [2024-07-15 12:55:43.487583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.028 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.285 [2024-07-15 12:55:43.497197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.285 [2024-07-15 12:55:43.497252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.285 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.285 [2024-07-15 12:55:43.513247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.285 [2024-07-15 12:55:43.513308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.285 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.285 [2024-07-15 12:55:43.529159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.285 [2024-07-15 12:55:43.529220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.285 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.285 [2024-07-15 12:55:43.546229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.285 [2024-07-15 12:55:43.546293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.285 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.285 [2024-07-15 12:55:43.562615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.285 [2024-07-15 12:55:43.562673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.572880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.572931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.584422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.584493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.600266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.600337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.617091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.617164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.634236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.634301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.650852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.650912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.661402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.661460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.676396] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.676457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.693587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.693645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.709739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.709813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.725664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.725718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.736345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.736408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.286 [2024-07-15 12:55:43.747886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.286 [2024-07-15 12:55:43.747936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.286 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.543 [2024-07-15 12:55:43.759611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.543 [2024-07-15 12:55:43.759673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.543 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.543 [2024-07-15 12:55:43.775898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.543 [2024-07-15 12:55:43.775976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.792974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.793045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.810454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.810540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.826403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.826478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.841731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.841812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.858635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.858705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.873884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.873972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.889527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.889613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.905530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.905629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.921059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.921144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.931483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.931540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.944043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.944116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.959641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.959712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.977064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.977122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.987919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.987971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.544 [2024-07-15 12:55:43.999566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.544 [2024-07-15 12:55:43.999631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.544 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.801 [2024-07-15 12:55:44.015503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.801 [2024-07-15 12:55:44.015567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.801 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.801 [2024-07-15 12:55:44.031842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.801 [2024-07-15 12:55:44.031892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.801 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.801 [2024-07-15 12:55:44.047949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.801 [2024-07-15 12:55:44.048028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.059661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
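Note: the repeated entries above and below record the negative-path check in this nvmf namespace test. Each iteration issues the nvmf_subsystem_add_ns JSON-RPC method against nqn.2016-06.io.spdk:cnode1 with NSID 1, which the subsystem already exposes, and the target rejects it with Code=-32602 (Invalid parameters), logging "Requested NSID 1 already in use". The following is only a minimal illustrative sketch of that request/response shape, not part of the test suite; it assumes the target listens on the default SPDK RPC Unix socket at /var/tmp/spdk.sock, while the method name, parameter names, NQN, bdev name and NSID are taken directly from the log lines.

#!/usr/bin/env python3
# Sketch only: send one nvmf_subsystem_add_ns request with an NSID that is
# already in use and print the JSON-RPC error the target returns.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumption: default SPDK RPC socket location

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # A single recv is enough for this small reply; a real client would
    # keep reading until a complete JSON object has been received.
    reply = json.loads(sock.recv(65536).decode())

if "error" in reply:
    # For a duplicate NSID this prints the same failure seen in the log,
    # e.g. -32602 Invalid parameters.
    print(reply["error"]["code"], reply["error"]["message"])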
00:11:31.802 [2024-07-15 12:55:44.059712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.074725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.074795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.093038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.093096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.109824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.109876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.120283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.120329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.132681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.132730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.148327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.148379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.164706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.164756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.180642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.180699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.190999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.191068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.205941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.205999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.216266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.216339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.227963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.228018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.242974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.243037] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:31.802 [2024-07-15 12:55:44.254198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.802 [2024-07-15 12:55:44.254245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.802 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.060 [2024-07-15 12:55:44.269953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.060 [2024-07-15 12:55:44.270003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.060 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.060 [2024-07-15 12:55:44.286031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.060 [2024-07-15 12:55:44.286084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.060 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.060 [2024-07-15 12:55:44.302915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.060 [2024-07-15 12:55:44.302978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.060 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.060 [2024-07-15 12:55:44.320867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.060 [2024-07-15 12:55:44.320926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.060 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.060 [2024-07-15 12:55:44.334828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.060 [2024-07-15 12:55:44.334891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.351423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.351481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.366747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.366826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.383391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.383450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.393953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.394015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.406473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.406533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.421887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.421947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.438033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.438085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.448939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.448985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.465059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.465120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.480324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.480374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.495282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.495342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.512119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.512179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.061 [2024-07-15 12:55:44.522605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.061 [2024-07-15 12:55:44.522655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.061 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:32.320 [2024-07-15 12:55:44.537557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.537597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.554748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.554808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.572275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.572358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.588446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.588494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.607486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.607558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.618046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.618091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.629685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.629741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.644710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.644758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.661371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.661427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.677024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.677093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.687911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.687952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.703311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.703361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.719875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.719944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.736503] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.736556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.746592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.746635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.758344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.758399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.774523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.774576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.320 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.320 [2024-07-15 12:55:44.785681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.320 [2024-07-15 12:55:44.785738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.800806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.800877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.817326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.817375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.833607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.833663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.850028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.850085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.860361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.860407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.875573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.875622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.890457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.890515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.907530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.907589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.918244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.918297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.933628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.933683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.950698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.950760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.966343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.966415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.983640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.983715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:44.998549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:44.998613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:45.017005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:45.017061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:45.032305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:45.032352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.579 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.579 [2024-07-15 12:55:45.043521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.579 [2024-07-15 12:55:45.043573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.059117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.059168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.075263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.075318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.092097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.092155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.103186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.103233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.114547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:32.837 [2024-07-15 12:55:45.114594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.126226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.126275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.137940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.138002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.153456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.153514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.168842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.168899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.184476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.184531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.194873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.194922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.209503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.209556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.225243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.225294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.837 [2024-07-15 12:55:45.240312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.837 [2024-07-15 12:55:45.240361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.837 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.838 [2024-07-15 12:55:45.250336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.838 [2024-07-15 12:55:45.250388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.838 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.838 [2024-07-15 12:55:45.263013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.838 [2024-07-15 12:55:45.263060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.838 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.838 [2024-07-15 12:55:45.278661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.838 [2024-07-15 12:55:45.278713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.838 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:32.838 [2024-07-15 12:55:45.295203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.838 [2024-07-15 12:55:45.295254] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.838 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.311485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.311536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.327725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.327794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.345687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.345737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.356531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.356576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.369312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.369359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.385881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.385928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.396536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.396580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.411663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.411717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.421858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.421905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.437387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.437454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.453561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.453616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.470477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.470540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.487468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.487536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.502976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.503038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.513045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.513095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.525385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.525434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.540859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.540926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.558276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.558359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.577371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.577441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:33.174 [2024-07-15 12:55:45.596047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.596108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.174 [2024-07-15 12:55:45.611715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.174 [2024-07-15 12:55:45.611798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.174 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.627914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.627953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.645121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.645161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.661055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.661099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.677548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.677590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.693130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.693180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.708794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.708845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.718410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.718466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.734486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.734538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.750011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.750056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.759967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.760002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.774344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.774386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.784109] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.784144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.799282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.799334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.817347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.817400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.832914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.832963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.843004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.843045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.858050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.858092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.873257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.873313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.434 [2024-07-15 12:55:45.889437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.434 [2024-07-15 12:55:45.889492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.434 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.692 [2024-07-15 12:55:45.905587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.905628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.915609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.915649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.929711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.929751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.944802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.944847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.955692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.955733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.970953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.971004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:45.987515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:45.987555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.003998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.004041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.014415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.014457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.025304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.025345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.035743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.035793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.046877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.046913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.059463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.059499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.076578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.076615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.092859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.092896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.110147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.110188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.126795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.126834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.142182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.693 [2024-07-15 12:55:46.142219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.693 [2024-07-15 12:55:46.151840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
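For readers tracing this failure loop: each attempt above is an ordinary JSON-RPC 2.0 call to the target's nvmf_subsystem_add_ns method with the parameters the log prints (subsystem nqn.2016-06.io.spdk:cnode1, bdev malloc0, nsid 1), and the target refuses it because NSID 1 is already attached to that subsystem. The sketch below reproduces one such call with only the Python standard library; the socket path /var/tmp/spdk.sock and the read-until-parse framing are assumptions made for illustration, not details taken from this log.

    # Minimal sketch (assumed socket path and framing): issue the same
    # nvmf_subsystem_add_ns call that this test repeats, then print the
    # error object returned when the NSID is already in use.
    import json
    import socket

    def rpc_call(method, params, sock_path="/var/tmp/spdk.sock"):  # path is an assumption
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf)      # full response object received
                except json.JSONDecodeError:
                    continue                    # partial read, keep receiving
            return json.loads(buf)

    resp = rpc_call("nvmf_subsystem_add_ns", {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    })
    print(resp.get("error"))    # expect code -32602, matching the log above

The stray %!s(bool=false) inside the logged params is a formatting artifact from the client that issued these calls (a Go fmt verb applied to a boolean), not part of the RPC payload itself; the underlying value is simply the boolean false.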
00:11:33.693 [2024-07-15 12:55:46.151876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.693 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.167066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.167107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.182688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.182727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.192954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.192991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.208177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.208216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.225296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.225343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.240915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.240958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.251724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.251776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.266666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.266710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.283785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.283824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.294616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.294655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.305252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.305290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.315953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.315990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.330524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.330564] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.340518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.340556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.354648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.354697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.369982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.370022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.379592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.379631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.395008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.395049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:33.951 [2024-07-15 12:55:46.404874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.951 [2024-07-15 12:55:46.404913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.951 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.419682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.419728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.435543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.435585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.445435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.445473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.457037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.457075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.469532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.469569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.479319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.479356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.493591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.493629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.508617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.508655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.525216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.525255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.542257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.542306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.557660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.557700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.568134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.568172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.583510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.583549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:34.210 [2024-07-15 12:55:46.593639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.593692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.607997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.608033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.210 [2024-07-15 12:55:46.617970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.210 [2024-07-15 12:55:46.618009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.210 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.211 [2024-07-15 12:55:46.632682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.211 [2024-07-15 12:55:46.632726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.211 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.211 [2024-07-15 12:55:46.650708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.211 [2024-07-15 12:55:46.650749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.211 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.211 [2024-07-15 12:55:46.666349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.211 [2024-07-15 12:55:46.666396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.211 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.682869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.682909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.701395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.701437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.716352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.716392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.725869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.725906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.737415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.737454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.748456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.748497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.763393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.763445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.779724] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.779800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.794526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.794573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.810068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.810133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.825199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.825240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.840816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.840858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.858819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.858864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.874230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.874286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.884036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.884073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.894395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.894426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.905816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.905848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.918232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.918269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.469 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.469 [2024-07-15 12:55:46.935339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.469 [2024-07-15 12:55:46.935391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:46.951444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:46.951487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:46.962020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:46.962060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:46.972972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:46.973013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:46.990977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:46.991025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.006886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.006933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.023949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.023996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.040456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.040511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.056556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.056601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.066179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.066223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.082426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.082473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.098477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.098548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.114945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.115005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.131341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.131381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.147921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.147962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.164532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
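Each rejection reaches the client as Code=-32602 Msg=Invalid parameters. That code is not SPDK-specific: -32602 is the reserved "Invalid params" error defined by the JSON-RPC 2.0 specification, and the target folds the internal "Requested NSID 1 already in use" condition into it, which is why the client-side message stays generic while the precise reason appears only in the target's own *ERROR* lines. A small lookup of the reserved codes, handy when scanning output like this, could look like the sketch below (the helper name is illustrative, not an SPDK API).

    # Reserved JSON-RPC 2.0 error codes from the public specification.
    JSONRPC_ERRORS = {
        -32700: "Parse error",
        -32600: "Invalid Request",
        -32601: "Method not found",
        -32602: "Invalid params",   # what the duplicate-NSID add above comes back as
        -32603: "Internal error",
    }

    def describe(code: int) -> str:             # illustrative helper, not an SPDK API
        # -32000..-32099 is reserved for implementation-defined server errors.
        if -32099 <= code <= -32000:
            return "Server error (implementation defined)"
        return JSONRPC_ERRORS.get(code, "Application or library defined error")

    print(describe(-32602))                     # -> Invalid params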
00:11:34.727 [2024-07-15 12:55:47.164574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.174512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.727 [2024-07-15 12:55:47.174559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.727 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.727 [2024-07-15 12:55:47.185553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.728 [2024-07-15 12:55:47.185592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.728 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.201885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.201944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.211535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.211575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.226690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.226749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.242105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.242153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.252342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.252383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.267237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.267298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.282138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.282195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.300140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.300213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.315423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.315474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.333382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.333430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.348384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.348442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.364384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.364440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.382095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.382158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.397585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.397639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.407222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.407260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.423203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.423247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.433141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.433186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:34.987 [2024-07-15 12:55:47.447552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.987 [2024-07-15 12:55:47.447625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.987 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.458503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.458565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.473259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.473315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.489214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.489258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.499878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.499941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.514997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.515065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.530864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.530933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.541466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.541529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.556547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.556600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.571744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.571815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.588756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.588818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.599352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.599390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.614201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.614266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:35.246 [2024-07-15 12:55:47.630806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.630856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.647834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.647869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.661270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.661308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 00:11:35.246 Latency(us) 00:11:35.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.246 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:35.246 Nvme1n1 : 5.01 11285.65 88.17 0.00 0.00 11326.16 4885.41 20018.27 00:11:35.246 =================================================================================================================== 00:11:35.246 Total : 11285.65 88.17 0.00 0.00 11326.16 4885.41 20018.27 00:11:35.246 [2024-07-15 12:55:47.671388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.671444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.683393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.683455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.246 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.246 [2024-07-15 12:55:47.695399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.246 [2024-07-15 12:55:47.695462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.247 2024/07/15 12:55:47 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.247 [2024-07-15 12:55:47.707390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.247 [2024-07-15 12:55:47.707447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.247 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.719422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.719481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.731403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.731452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.743399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.743476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.755401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.755446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.767389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.767429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.779407] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.779452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.791401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.791446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.799366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.799395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.811413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.811458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.823381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.823411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 [2024-07-15 12:55:47.835382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.506 [2024-07-15 12:55:47.835412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.506 2024/07/15 12:55:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:35.506 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76221) - No such process 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76221 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.506 12:55:47 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.506 delay0 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.506 12:55:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:35.764 [2024-07-15 12:55:48.026053] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:42.335 Initializing NVMe Controllers 00:11:42.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:42.335 Initialization complete. Launching workers. 
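For reference, the rpc_cmd sequence traced just above (remove the contested namespace, wrap malloc0 in a delay bdev, re-add it as NSID 1, then run the abort example) corresponds roughly to the following standalone invocations. This is a hedged sketch, not a verbatim replay of zcopy.sh: rpc_cmd is assumed to forward to scripts/rpc.py, and paths are given relative to the repo root.
# drop the namespace that the duplicate-NSID loop above kept colliding with
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# layer a delay bdev on top of malloc0 (latencies in microseconds, values as logged)
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# expose the delayed bdev as NSID 1 again
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive it with the abort example over TCP, using the arguments from the log
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
The long run of Code=-32602 "Requested NSID 1 already in use" failures earlier in the log appears to be the intended negative path: while malloc0 still held NSID 1, every re-add attempt was rejected, and only after the remove_ns step could delay0 take its place.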
00:11:42.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 88 00:11:42.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 33 00:11:42.335 success 202, unsuccess 173, failed 0 00:11:42.335 12:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:42.335 12:55:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:42.335 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:42.335 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.336 rmmod nvme_tcp 00:11:42.336 rmmod nvme_fabrics 00:11:42.336 rmmod nvme_keyring 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@493 -- # '[' -n 76053 ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@494 -- # killprocess 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76053 ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:42.336 killing process with pid 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76053' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76053 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:11:42.336 00:11:42.336 real 0m24.342s 00:11:42.336 user 0m39.710s 00:11:42.336 sys 0m6.136s 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.336 12:55:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.336 ************************************ 00:11:42.336 END TEST nvmf_zcopy 00:11:42.336 ************************************ 00:11:42.336 12:55:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:42.336 12:55:54 nvmf_tcp -- nvmf/nvmf.sh@58 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:42.336 12:55:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:42.336 12:55:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.336 12:55:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.336 ************************************ 00:11:42.336 START TEST nvmf_nmic 00:11:42.336 ************************************ 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:42.336 * Looking for test storage... 00:11:42.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.336 
12:55:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@436 -- # nvmf_veth_init 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:11:42.336 Cannot find device "nvmf_tgt_br" 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.336 Cannot find device "nvmf_tgt_br2" 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:11:42.336 Cannot find device "nvmf_tgt_br" 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:11:42.336 Cannot 
find device "nvmf_tgt_br2" 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.336 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:11:42.337 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 
-- # ping -c 1 10.0.0.2 00:11:42.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:42.595 00:11:42.595 --- 10.0.0.2 ping statistics --- 00:11:42.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.595 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:11:42.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:11:42.595 00:11:42.595 --- 10.0.0.3 ping statistics --- 00:11:42.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.595 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:42.595 00:11:42.595 --- 10.0.0.1 ping statistics --- 00:11:42.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.595 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@437 -- # return 0 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@485 -- # nvmfpid=76540 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@486 -- # waitforlisten 76540 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76540 ']' 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
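Condensed, the nvmf_veth_init plumbing traced above amounts to roughly the following sequence. This is a sketch assembled from the commands visible in the log; the second veth pair (nvmf_tgt_if2/nvmf_tgt_br2, 10.0.0.3) is created the same way and is omitted here.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target is then started inside the namespace, as logged:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
With both veth ends bridged, the initiator at 10.0.0.1 and the target at 10.0.0.2 share a segment without any physical NIC, which is why the pings above succeed and the later nvme connect to 10.0.0.2:4420/4421 can reach the listener started inside the namespace.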
00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.595 12:55:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.595 [2024-07-15 12:55:54.964442] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:11:42.595 [2024-07-15 12:55:54.964548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.853 [2024-07-15 12:55:55.105392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.853 [2024-07-15 12:55:55.166507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.853 [2024-07-15 12:55:55.166562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.853 [2024-07-15 12:55:55.166573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.853 [2024-07-15 12:55:55.166582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.853 [2024-07-15 12:55:55.166589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.853 [2024-07-15 12:55:55.166736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.853 [2024-07-15 12:55:55.166864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.853 [2024-07-15 12:55:55.166941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.853 [2024-07-15 12:55:55.166942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:42.853 [2024-07-15 12:55:55.290107] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.853 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.110 Malloc0 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.110 
12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.110 [2024-07-15 12:55:55.350434] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.110 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.111 test case1: single bdev can't be used in multiple subsystems 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.111 [2024-07-15 12:55:55.374206] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:43.111 [2024-07-15 12:55:55.374251] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:43.111 [2024-07-15 12:55:55.374263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.111 2024/07/15 12:55:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.111 request: 00:11:43.111 { 00:11:43.111 "method": "nvmf_subsystem_add_ns", 00:11:43.111 "params": { 00:11:43.111 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:43.111 "namespace": { 00:11:43.111 "bdev_name": "Malloc0", 00:11:43.111 
"no_auto_visible": false 00:11:43.111 } 00:11:43.111 } 00:11:43.111 } 00:11:43.111 Got JSON-RPC error response 00:11:43.111 GoRPCClient: error on JSON-RPC call 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:43.111 Adding namespace failed - expected result. 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:43.111 test case2: host connect to nvmf target in multiple paths 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.111 [2024-07-15 12:55:55.386387] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.111 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:43.368 12:55:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.368 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:43.368 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.368 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:43.368 12:55:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:45.263 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:45.263 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.263 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:45.521 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:45.521 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.521 12:55:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:45.521 12:55:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:45.521 [global] 00:11:45.521 thread=1 00:11:45.521 invalidate=1 00:11:45.521 rw=write 00:11:45.521 time_based=1 00:11:45.521 runtime=1 00:11:45.521 ioengine=libaio 00:11:45.521 direct=1 00:11:45.521 bs=4096 00:11:45.521 iodepth=1 00:11:45.521 norandommap=0 00:11:45.521 numjobs=1 00:11:45.521 00:11:45.521 verify_dump=1 00:11:45.521 verify_backlog=512 00:11:45.521 verify_state_save=0 00:11:45.521 
do_verify=1 00:11:45.521 verify=crc32c-intel 00:11:45.521 [job0] 00:11:45.521 filename=/dev/nvme0n1 00:11:45.521 Could not set queue depth (nvme0n1) 00:11:45.521 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.521 fio-3.35 00:11:45.521 Starting 1 thread 00:11:46.894 00:11:46.894 job0: (groupid=0, jobs=1): err= 0: pid=76631: Mon Jul 15 12:55:59 2024 00:11:46.894 read: IOPS=3170, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:11:46.894 slat (nsec): min=14726, max=51209, avg=17709.22, stdev=3080.20 00:11:46.894 clat (usec): min=126, max=600, avg=145.75, stdev=16.14 00:11:46.894 lat (usec): min=142, max=632, avg=163.46, stdev=16.85 00:11:46.894 clat percentiles (usec): 00:11:46.894 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:11:46.894 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:11:46.894 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 161], 00:11:46.894 | 99.00th=[ 192], 99.50th=[ 219], 99.90th=[ 351], 99.95th=[ 529], 00:11:46.894 | 99.99th=[ 603] 00:11:46.894 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:46.894 slat (usec): min=20, max=129, avg=27.70, stdev= 7.76 00:11:46.894 clat (usec): min=86, max=223, avg=102.87, stdev= 8.89 00:11:46.894 lat (usec): min=109, max=353, avg=130.58, stdev=13.46 00:11:46.894 clat percentiles (usec): 00:11:46.894 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 95], 20.00th=[ 97], 00:11:46.894 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:11:46.894 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 119], 00:11:46.894 | 99.00th=[ 137], 99.50th=[ 151], 99.90th=[ 163], 99.95th=[ 172], 00:11:46.894 | 99.99th=[ 223] 00:11:46.894 bw ( KiB/s): min=14304, max=14304, per=99.88%, avg=14304.00, stdev= 0.00, samples=1 00:11:46.894 iops : min= 3576, max= 3576, avg=3576.00, stdev= 0.00, samples=1 00:11:46.894 lat (usec) : 100=23.68%, 250=76.24%, 500=0.06%, 750=0.03% 00:11:46.894 cpu : usr=2.30%, sys=12.00%, ctx=6758, majf=0, minf=2 00:11:46.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.894 issued rwts: total=3174,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.894 00:11:46.894 Run status group 0 (all jobs): 00:11:46.895 READ: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:11:46.895 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:11:46.895 00:11:46.895 Disk stats (read/write): 00:11:46.895 nvme0n1: ios=2995/3072, merge=0/0, ticks=463/353, in_queue=816, util=91.28% 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic 
-- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.895 rmmod nvme_tcp 00:11:46.895 rmmod nvme_fabrics 00:11:46.895 rmmod nvme_keyring 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@493 -- # '[' -n 76540 ']' 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@494 -- # killprocess 76540 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76540 ']' 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76540 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76540 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.895 killing process with pid 76540 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76540' 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76540 00:11:46.895 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76540 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:11:47.154 00:11:47.154 real 0m5.003s 00:11:47.154 user 0m16.434s 00:11:47.154 sys 0m1.236s 00:11:47.154 12:55:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.154 12:55:59 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.154 ************************************ 00:11:47.154 END TEST nvmf_nmic 00:11:47.154 ************************************ 00:11:47.154 12:55:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:47.154 12:55:59 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:47.154 12:55:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:47.154 12:55:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.154 12:55:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:47.154 ************************************ 00:11:47.154 START TEST nvmf_fio_target 00:11:47.154 ************************************ 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:47.154 * Looking for test storage... 00:11:47.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.154 12:55:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.155 
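For orientation before the nvmftestinit trace that follows: nvmf_veth_init wires the initiator (host side) and the SPDK target (inside the nvmf_tgt_ns_spdk network namespace) together over veth pairs and a bridge. The sketch below is condensed from the commands recorded in this trace; link bring-up, teardown of leftovers, and error handling are omitted, so treat it as an outline rather than the exact script.

    # Outline of the topology nvmf_veth_init builds (condensed from the trace below)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target ends are moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge                                 # host-side peers are tied together by a bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach port 4420

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology before the target application is started in the namespace.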
12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@436 -- # nvmf_veth_init 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.155 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:11:47.414 Cannot find device "nvmf_tgt_br" 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.414 Cannot find device "nvmf_tgt_br2" 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # true 00:11:47.414 12:55:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:11:47.414 Cannot find device "nvmf_tgt_br" 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:11:47.414 Cannot find device "nvmf_tgt_br2" 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.414 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:11:47.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:11:47.672 00:11:47.672 --- 10.0.0.2 ping statistics --- 00:11:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.672 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:11:47.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:47.672 00:11:47.672 --- 10.0.0.3 ping statistics --- 00:11:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.672 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:47.672 00:11:47.672 --- 10.0.0.1 ping statistics --- 00:11:47.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.672 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@437 -- # return 0 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@485 -- # nvmfpid=76808 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@486 -- # waitforlisten 76808 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # 
'[' -z 76808 ']' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.672 12:55:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.672 [2024-07-15 12:55:59.996713] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:11:47.672 [2024-07-15 12:55:59.996826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.672 [2024-07-15 12:56:00.136329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.930 [2024-07-15 12:56:00.205647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.930 [2024-07-15 12:56:00.205708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.930 [2024-07-15 12:56:00.205722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.930 [2024-07-15 12:56:00.205733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.930 [2024-07-15 12:56:00.205742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
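Once nvmf_tgt is up inside the namespace (pid 76808, one reactor per core with -m 0xF), target/fio.sh builds the storage stack it will exercise through $rpc_py (/home/vagrant/spdk_repo/spdk/scripts/rpc.py). The sketch below condenses the rpc.py and nvme calls traced further down; the exact flags and ordering are in the trace, and the malloc create is repeated for Malloc0 through Malloc6.

    # Condensed storage-stack setup performed by target/fio.sh (see the trace below for the exact calls)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport for the target
    $rpc_py bdev_malloc_create 64 512                                   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE; Malloc0..Malloc6
    $rpc_py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'           # striped raid0
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'   # concatenated bdev
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # plus Malloc1, raid0 and concat0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk for four block devices carrying the SPDKISFASTANDAWESOME serial (one per namespace) before fio-wrapper runs the write and randwrite jobs, at iodepth 1 and 128, against /dev/nvme0n1 through /dev/nvme0n4.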
00:11:47.930 [2024-07-15 12:56:00.206088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.930 [2024-07-15 12:56:00.206140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.930 [2024-07-15 12:56:00.206345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.930 [2024-07-15 12:56:00.206351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.930 12:56:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:48.186 [2024-07-15 12:56:00.600099] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.186 12:56:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:48.750 12:56:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:48.750 12:56:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.008 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:49.008 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.008 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:49.265 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:49.521 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:49.521 12:56:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:49.779 12:56:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.036 12:56:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:50.036 12:56:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.294 12:56:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:50.294 12:56:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:50.859 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:50.859 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:51.117 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:11:51.117 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:51.117 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.681 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:51.681 12:56:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.681 12:56:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.246 [2024-07-15 12:56:04.410250] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.246 12:56:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:52.246 12:56:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:52.503 12:56:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:52.761 12:56:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.659 12:56:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:54.660 12:56:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:54.660 [global] 00:11:54.660 thread=1 00:11:54.660 invalidate=1 00:11:54.660 rw=write 00:11:54.660 time_based=1 00:11:54.660 runtime=1 00:11:54.660 ioengine=libaio 00:11:54.660 direct=1 00:11:54.660 bs=4096 00:11:54.660 iodepth=1 00:11:54.660 norandommap=0 00:11:54.660 numjobs=1 00:11:54.660 00:11:54.660 verify_dump=1 00:11:54.660 verify_backlog=512 00:11:54.660 verify_state_save=0 00:11:54.660 do_verify=1 00:11:54.660 verify=crc32c-intel 00:11:54.660 [job0] 00:11:54.660 filename=/dev/nvme0n1 00:11:54.660 [job1] 00:11:54.660 filename=/dev/nvme0n2 00:11:54.660 [job2] 
00:11:54.660 filename=/dev/nvme0n3 00:11:54.660 [job3] 00:11:54.660 filename=/dev/nvme0n4 00:11:54.948 Could not set queue depth (nvme0n1) 00:11:54.949 Could not set queue depth (nvme0n2) 00:11:54.949 Could not set queue depth (nvme0n3) 00:11:54.949 Could not set queue depth (nvme0n4) 00:11:54.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.949 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.949 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.949 fio-3.35 00:11:54.949 Starting 4 threads 00:11:56.323 00:11:56.323 job0: (groupid=0, jobs=1): err= 0: pid=77092: Mon Jul 15 12:56:08 2024 00:11:56.323 read: IOPS=1577, BW=6310KiB/s (6461kB/s)(6316KiB/1001msec) 00:11:56.323 slat (nsec): min=11488, max=87291, avg=16197.63, stdev=4197.82 00:11:56.323 clat (usec): min=173, max=385, avg=289.26, stdev=15.79 00:11:56.323 lat (usec): min=205, max=411, avg=305.46, stdev=15.19 00:11:56.323 clat percentiles (usec): 00:11:56.323 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 277], 00:11:56.323 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:56.323 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 314], 00:11:56.323 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 388], 00:11:56.323 | 99.99th=[ 388] 00:11:56.323 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:56.323 slat (usec): min=12, max=286, avg=26.01, stdev= 7.94 00:11:56.323 clat (usec): min=66, max=840, avg=223.38, stdev=22.73 00:11:56.323 lat (usec): min=128, max=870, avg=249.38, stdev=22.10 00:11:56.323 clat percentiles (usec): 00:11:56.323 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:11:56.323 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:56.323 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251], 00:11:56.323 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 388], 00:11:56.323 | 99.99th=[ 840] 00:11:56.323 bw ( KiB/s): min= 8192, max= 8192, per=21.66%, avg=8192.00, stdev= 0.00, samples=1 00:11:56.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:56.323 lat (usec) : 100=0.03%, 250=53.63%, 500=46.32%, 1000=0.03% 00:11:56.323 cpu : usr=1.40%, sys=6.20%, ctx=3630, majf=0, minf=9 00:11:56.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 issued rwts: total=1579,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.324 job1: (groupid=0, jobs=1): err= 0: pid=77093: Mon Jul 15 12:56:08 2024 00:11:56.324 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:56.324 slat (nsec): min=12381, max=46188, avg=17679.15, stdev=3379.56 00:11:56.324 clat (usec): min=143, max=400, avg=188.66, stdev=44.07 00:11:56.324 lat (usec): min=162, max=415, avg=206.34, stdev=43.55 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:56.324 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:11:56.324 | 70.00th=[ 182], 
80.00th=[ 188], 90.00th=[ 262], 95.00th=[ 293], 00:11:56.324 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 383], 99.95th=[ 388], 00:11:56.324 | 99.99th=[ 400] 00:11:56.324 write: IOPS=2806, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:11:56.324 slat (nsec): min=12675, max=81470, avg=25945.11, stdev=6660.38 00:11:56.324 clat (usec): min=107, max=337, avg=138.30, stdev=30.61 00:11:56.324 lat (usec): min=130, max=359, avg=164.24, stdev=30.41 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:11:56.324 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:11:56.324 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 202], 00:11:56.324 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 302], 00:11:56.324 | 99.99th=[ 338] 00:11:56.324 bw ( KiB/s): min=12288, max=12288, per=32.49%, avg=12288.00, stdev= 0.00, samples=1 00:11:56.324 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:56.324 lat (usec) : 250=92.61%, 500=7.39% 00:11:56.324 cpu : usr=2.10%, sys=9.00%, ctx=5370, majf=0, minf=5 00:11:56.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 issued rwts: total=2560,2809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.324 job2: (groupid=0, jobs=1): err= 0: pid=77094: Mon Jul 15 12:56:08 2024 00:11:56.324 read: IOPS=2375, BW=9502KiB/s (9731kB/s)(9512KiB/1001msec) 00:11:56.324 slat (nsec): min=12160, max=66867, avg=20486.15, stdev=6351.48 00:11:56.324 clat (usec): min=153, max=7798, avg=202.07, stdev=170.51 00:11:56.324 lat (usec): min=168, max=7814, avg=222.55, stdev=170.25 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:56.324 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:11:56.324 | 70.00th=[ 196], 80.00th=[ 223], 90.00th=[ 269], 95.00th=[ 293], 00:11:56.324 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 693], 99.95th=[ 2769], 00:11:56.324 | 99.99th=[ 7767] 00:11:56.324 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:56.324 slat (usec): min=12, max=128, avg=29.14, stdev=10.17 00:11:56.324 clat (usec): min=110, max=7592, avg=150.57, stdev=165.03 00:11:56.324 lat (usec): min=136, max=7634, avg=179.71, stdev=165.27 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 127], 00:11:56.324 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:11:56.324 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 180], 95.00th=[ 243], 00:11:56.324 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 1598], 99.95th=[ 3163], 00:11:56.324 | 99.99th=[ 7570] 00:11:56.324 bw ( KiB/s): min=12288, max=12288, per=32.49%, avg=12288.00, stdev= 0.00, samples=1 00:11:56.324 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:56.324 lat (usec) : 250=91.23%, 500=8.61%, 750=0.06% 00:11:56.324 lat (msec) : 2=0.02%, 4=0.04%, 10=0.04% 00:11:56.324 cpu : usr=2.70%, sys=8.90%, ctx=4940, majf=0, minf=10 00:11:56.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 issued rwts: total=2378,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.324 job3: (groupid=0, jobs=1): err= 0: pid=77095: Mon Jul 15 12:56:08 2024 00:11:56.324 read: IOPS=1577, BW=6310KiB/s (6461kB/s)(6316KiB/1001msec) 00:11:56.324 slat (nsec): min=11735, max=86593, avg=17157.85, stdev=4789.07 00:11:56.324 clat (usec): min=245, max=363, avg=288.43, stdev=14.72 00:11:56.324 lat (usec): min=267, max=379, avg=305.59, stdev=14.12 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:56.324 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:56.324 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 314], 00:11:56.324 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 363], 00:11:56.324 | 99.99th=[ 363] 00:11:56.324 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:56.324 slat (nsec): min=12498, max=53495, avg=26166.82, stdev=5142.25 00:11:56.324 clat (usec): min=128, max=759, avg=223.10, stdev=20.28 00:11:56.324 lat (usec): min=151, max=792, avg=249.27, stdev=19.91 00:11:56.324 clat percentiles (usec): 00:11:56.324 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 210], 00:11:56.324 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:56.324 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 249], 00:11:56.324 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 326], 00:11:56.324 | 99.99th=[ 758] 00:11:56.324 bw ( KiB/s): min= 8192, max= 8192, per=21.66%, avg=8192.00, stdev= 0.00, samples=1 00:11:56.324 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:56.324 lat (usec) : 250=54.26%, 500=45.71%, 1000=0.03% 00:11:56.324 cpu : usr=1.60%, sys=6.10%, ctx=3634, majf=0, minf=11 00:11:56.324 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:56.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:56.324 issued rwts: total=1579,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:56.324 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:56.324 00:11:56.324 Run status group 0 (all jobs): 00:11:56.324 READ: bw=31.6MiB/s (33.1MB/s), 6310KiB/s-9.99MiB/s (6461kB/s-10.5MB/s), io=31.6MiB (33.2MB), run=1001-1001msec 00:11:56.324 WRITE: bw=36.9MiB/s (38.7MB/s), 8184KiB/s-11.0MiB/s (8380kB/s-11.5MB/s), io=37.0MiB (38.8MB), run=1001-1001msec 00:11:56.324 00:11:56.324 Disk stats (read/write): 00:11:56.324 nvme0n1: ios=1586/1565, merge=0/0, ticks=474/352, in_queue=826, util=88.78% 00:11:56.324 nvme0n2: ios=2373/2560, merge=0/0, ticks=448/363, in_queue=811, util=89.60% 00:11:56.324 nvme0n3: ios=2048/2432, merge=0/0, ticks=387/377, in_queue=764, util=88.59% 00:11:56.324 nvme0n4: ios=1536/1565, merge=0/0, ticks=450/371, in_queue=821, util=89.88% 00:11:56.324 12:56:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:56.324 [global] 00:11:56.324 thread=1 00:11:56.324 invalidate=1 00:11:56.324 rw=randwrite 00:11:56.324 time_based=1 00:11:56.324 runtime=1 00:11:56.324 ioengine=libaio 00:11:56.324 direct=1 00:11:56.324 bs=4096 00:11:56.324 iodepth=1 00:11:56.324 norandommap=0 00:11:56.324 numjobs=1 00:11:56.324 00:11:56.324 verify_dump=1 00:11:56.324 verify_backlog=512 
00:11:56.324 verify_state_save=0 00:11:56.324 do_verify=1 00:11:56.324 verify=crc32c-intel 00:11:56.324 [job0] 00:11:56.324 filename=/dev/nvme0n1 00:11:56.324 [job1] 00:11:56.324 filename=/dev/nvme0n2 00:11:56.324 [job2] 00:11:56.324 filename=/dev/nvme0n3 00:11:56.324 [job3] 00:11:56.324 filename=/dev/nvme0n4 00:11:56.324 Could not set queue depth (nvme0n1) 00:11:56.324 Could not set queue depth (nvme0n2) 00:11:56.324 Could not set queue depth (nvme0n3) 00:11:56.324 Could not set queue depth (nvme0n4) 00:11:56.324 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.324 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.324 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.324 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.324 fio-3.35 00:11:56.324 Starting 4 threads 00:11:57.697 00:11:57.697 job0: (groupid=0, jobs=1): err= 0: pid=77154: Mon Jul 15 12:56:09 2024 00:11:57.697 read: IOPS=2470, BW=9882KiB/s (10.1MB/s)(9892KiB/1001msec) 00:11:57.697 slat (nsec): min=13947, max=61373, avg=17597.61, stdev=3987.29 00:11:57.697 clat (usec): min=145, max=571, avg=208.01, stdev=54.97 00:11:57.697 lat (usec): min=162, max=594, avg=225.61, stdev=56.40 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:11:57.697 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 188], 00:11:57.697 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:11:57.697 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 529], 99.95th=[ 537], 00:11:57.697 | 99.99th=[ 570] 00:11:57.697 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:57.697 slat (usec): min=19, max=314, avg=25.15, stdev=10.69 00:11:57.697 clat (usec): min=4, max=1576, avg=143.74, stdev=57.59 00:11:57.697 lat (usec): min=122, max=1597, avg=168.89, stdev=59.41 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:11:57.697 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:11:57.697 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 227], 95.00th=[ 245], 00:11:57.697 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 396], 99.95th=[ 1369], 00:11:57.697 | 99.99th=[ 1582] 00:11:57.697 bw ( KiB/s): min=12288, max=12288, per=35.41%, avg=12288.00, stdev= 0.00, samples=1 00:11:57.697 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:57.697 lat (usec) : 10=0.02%, 20=0.02%, 50=0.02%, 100=0.08%, 250=82.00% 00:11:57.697 lat (usec) : 500=17.76%, 750=0.06% 00:11:57.697 lat (msec) : 2=0.04% 00:11:57.697 cpu : usr=2.10%, sys=7.90%, ctx=5047, majf=0, minf=11 00:11:57.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 issued rwts: total=2473,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.697 job1: (groupid=0, jobs=1): err= 0: pid=77155: Mon Jul 15 12:56:09 2024 00:11:57.697 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:57.697 slat (usec): min=11, max=1271, avg=22.31, stdev=33.93 00:11:57.697 clat (nsec): min=1426, 
max=42091k, avg=324361.20, stdev=1069272.69 00:11:57.697 lat (usec): min=191, max=42108, avg=346.67, stdev=1069.45 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:57.697 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:57.697 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 330], 00:11:57.697 | 99.00th=[ 441], 99.50th=[ 685], 99.90th=[ 2769], 99.95th=[42206], 00:11:57.697 | 99.99th=[42206] 00:11:57.697 write: IOPS=1837, BW=7349KiB/s (7525kB/s)(7356KiB/1001msec); 0 zone resets 00:11:57.697 slat (nsec): min=14344, max=96876, avg=25469.47, stdev=5140.07 00:11:57.697 clat (usec): min=138, max=356, avg=224.74, stdev=15.88 00:11:57.697 lat (usec): min=159, max=379, avg=250.21, stdev=15.90 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:11:57.697 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:11:57.697 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:11:57.697 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 322], 99.95th=[ 359], 00:11:57.697 | 99.99th=[ 359] 00:11:57.697 bw ( KiB/s): min= 8192, max= 8192, per=23.61%, avg=8192.00, stdev= 0.00, samples=1 00:11:57.697 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:57.697 lat (usec) : 2=0.03%, 4=0.03%, 250=51.97%, 500=47.64%, 750=0.18% 00:11:57.697 lat (usec) : 1000=0.06% 00:11:57.697 lat (msec) : 2=0.03%, 4=0.03%, 50=0.03% 00:11:57.697 cpu : usr=1.90%, sys=5.80%, ctx=3377, majf=0, minf=11 00:11:57.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 issued rwts: total=1536,1839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.697 job2: (groupid=0, jobs=1): err= 0: pid=77156: Mon Jul 15 12:56:09 2024 00:11:57.697 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:57.697 slat (nsec): min=13312, max=82090, avg=17120.26, stdev=5336.29 00:11:57.697 clat (usec): min=152, max=7380, avg=224.60, stdev=251.85 00:11:57.697 lat (usec): min=167, max=7394, avg=241.72, stdev=253.01 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:57.697 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:11:57.697 | 70.00th=[ 192], 80.00th=[ 285], 90.00th=[ 363], 95.00th=[ 396], 00:11:57.697 | 99.00th=[ 424], 99.50th=[ 465], 99.90th=[ 3425], 99.95th=[ 7242], 00:11:57.697 | 99.99th=[ 7373] 00:11:57.697 write: IOPS=2444, BW=9778KiB/s (10.0MB/s)(9788KiB/1001msec); 0 zone resets 00:11:57.697 slat (usec): min=19, max=112, avg=27.13, stdev= 9.50 00:11:57.697 clat (usec): min=111, max=7275, avg=175.54, stdev=184.81 00:11:57.697 lat (usec): min=133, max=7306, avg=202.68, stdev=187.05 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:11:57.697 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:11:57.697 | 70.00th=[ 212], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 269], 00:11:57.697 | 99.00th=[ 351], 99.50th=[ 383], 99.90th=[ 3589], 99.95th=[ 3752], 00:11:57.697 | 99.99th=[ 7308] 00:11:57.697 bw ( KiB/s): min=12288, max=12288, per=35.41%, avg=12288.00, stdev= 0.00, samples=1 00:11:57.697 iops : min= 
3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:57.697 lat (usec) : 250=81.71%, 500=18.02%, 750=0.04%, 1000=0.04% 00:11:57.697 lat (msec) : 2=0.02%, 4=0.09%, 10=0.07% 00:11:57.697 cpu : usr=2.20%, sys=7.30%, ctx=4496, majf=0, minf=12 00:11:57.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 issued rwts: total=2048,2447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.697 job3: (groupid=0, jobs=1): err= 0: pid=77157: Mon Jul 15 12:56:09 2024 00:11:57.697 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:57.697 slat (usec): min=10, max=2487, avg=20.26, stdev=64.38 00:11:57.697 clat (usec): min=183, max=42023, avg=326.98, stdev=1066.80 00:11:57.697 lat (usec): min=196, max=42039, avg=347.24, stdev=1069.05 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:11:57.697 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:11:57.697 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 00:11:57.697 | 99.00th=[ 424], 99.50th=[ 498], 99.90th=[ 2540], 99.95th=[42206], 00:11:57.697 | 99.99th=[42206] 00:11:57.697 write: IOPS=1836, BW=7345KiB/s (7521kB/s)(7352KiB/1001msec); 0 zone resets 00:11:57.697 slat (nsec): min=16430, max=86635, avg=25604.07, stdev=5159.45 00:11:57.697 clat (usec): min=127, max=339, avg=224.44, stdev=16.63 00:11:57.697 lat (usec): min=152, max=356, avg=250.04, stdev=16.26 00:11:57.697 clat percentiles (usec): 00:11:57.697 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:11:57.697 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:11:57.697 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:11:57.697 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 322], 99.95th=[ 338], 00:11:57.697 | 99.99th=[ 338] 00:11:57.697 bw ( KiB/s): min= 8192, max= 8192, per=23.61%, avg=8192.00, stdev= 0.00, samples=1 00:11:57.697 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:57.697 lat (usec) : 250=51.75%, 500=48.04%, 750=0.06%, 1000=0.09% 00:11:57.697 lat (msec) : 4=0.03%, 50=0.03% 00:11:57.697 cpu : usr=2.10%, sys=5.30%, ctx=3381, majf=0, minf=11 00:11:57.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:57.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.697 issued rwts: total=1536,1838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:57.697 00:11:57.697 Run status group 0 (all jobs): 00:11:57.697 READ: bw=29.6MiB/s (31.1MB/s), 6138KiB/s-9882KiB/s (6285kB/s-10.1MB/s), io=29.7MiB (31.1MB), run=1001-1001msec 00:11:57.698 WRITE: bw=33.9MiB/s (35.5MB/s), 7345KiB/s-9.99MiB/s (7521kB/s-10.5MB/s), io=33.9MiB (35.6MB), run=1001-1001msec 00:11:57.698 00:11:57.698 Disk stats (read/write): 00:11:57.698 nvme0n1: ios=2098/2503, merge=0/0, ticks=446/382, in_queue=828, util=89.18% 00:11:57.698 nvme0n2: ios=1405/1536, merge=0/0, ticks=472/352, in_queue=824, util=89.19% 00:11:57.698 nvme0n3: ios=1998/2048, merge=0/0, ticks=444/353, in_queue=797, util=87.98% 00:11:57.698 nvme0n4: ios=1382/1536, merge=0/0, ticks=550/371, in_queue=921, 
util=91.00% 00:11:57.698 12:56:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:57.698 [global] 00:11:57.698 thread=1 00:11:57.698 invalidate=1 00:11:57.698 rw=write 00:11:57.698 time_based=1 00:11:57.698 runtime=1 00:11:57.698 ioengine=libaio 00:11:57.698 direct=1 00:11:57.698 bs=4096 00:11:57.698 iodepth=128 00:11:57.698 norandommap=0 00:11:57.698 numjobs=1 00:11:57.698 00:11:57.698 verify_dump=1 00:11:57.698 verify_backlog=512 00:11:57.698 verify_state_save=0 00:11:57.698 do_verify=1 00:11:57.698 verify=crc32c-intel 00:11:57.698 [job0] 00:11:57.698 filename=/dev/nvme0n1 00:11:57.698 [job1] 00:11:57.698 filename=/dev/nvme0n2 00:11:57.698 [job2] 00:11:57.698 filename=/dev/nvme0n3 00:11:57.698 [job3] 00:11:57.698 filename=/dev/nvme0n4 00:11:57.698 Could not set queue depth (nvme0n1) 00:11:57.698 Could not set queue depth (nvme0n2) 00:11:57.698 Could not set queue depth (nvme0n3) 00:11:57.698 Could not set queue depth (nvme0n4) 00:11:57.698 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.698 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.698 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.698 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:57.698 fio-3.35 00:11:57.698 Starting 4 threads 00:11:59.066 00:11:59.066 job0: (groupid=0, jobs=1): err= 0: pid=77210: Mon Jul 15 12:56:11 2024 00:11:59.066 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:11:59.066 slat (usec): min=8, max=5744, avg=96.89, stdev=457.68 00:11:59.067 clat (usec): min=7643, max=19282, avg=12571.25, stdev=1503.01 00:11:59.067 lat (usec): min=7948, max=19304, avg=12668.13, stdev=1541.29 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[11338], 20.00th=[11731], 00:11:59.067 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:11:59.067 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14484], 95.00th=[15401], 00:11:59.067 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:11:59.067 | 99.99th=[19268] 00:11:59.067 write: IOPS=5181, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1005msec); 0 zone resets 00:11:59.067 slat (usec): min=11, max=4936, avg=88.50, stdev=374.45 00:11:59.067 clat (usec): min=4360, max=19069, avg=12050.16, stdev=1563.23 00:11:59.067 lat (usec): min=4672, max=19671, avg=12138.66, stdev=1600.69 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:11:59.067 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:11:59.067 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13435], 95.00th=[14615], 00:11:59.067 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19006], 99.95th=[19006], 00:11:59.067 | 99.99th=[19006] 00:11:59.067 bw ( KiB/s): min=20480, max=20480, per=26.33%, avg=20480.00, stdev= 0.00, samples=2 00:11:59.067 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:59.067 lat (msec) : 10=5.54%, 20=94.46% 00:11:59.067 cpu : usr=4.68%, sys=15.24%, ctx=656, majf=0, minf=2 00:11:59.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:59.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.067 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.067 issued rwts: total=5120,5207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.067 job1: (groupid=0, jobs=1): err= 0: pid=77211: Mon Jul 15 12:56:11 2024 00:11:59.067 read: IOPS=5038, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1003msec) 00:11:59.067 slat (usec): min=7, max=3651, avg=96.07, stdev=447.91 00:11:59.067 clat (usec): min=313, max=16222, avg=12731.01, stdev=1239.07 00:11:59.067 lat (usec): min=2574, max=16256, avg=12827.08, stdev=1173.26 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[ 6259], 5.00th=[10814], 10.00th=[11994], 20.00th=[12518], 00:11:59.067 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:11:59.067 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:11:59.067 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15401], 99.95th=[15664], 00:11:59.067 | 99.99th=[16188] 00:11:59.067 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:59.067 slat (usec): min=10, max=2895, avg=92.78, stdev=397.74 00:11:59.067 clat (usec): min=9479, max=14948, avg=12173.43, stdev=1230.53 00:11:59.067 lat (usec): min=9506, max=14968, avg=12266.21, stdev=1224.36 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:11:59.067 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12518], 60.00th=[12911], 00:11:59.067 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:11:59.067 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:11:59.067 | 99.99th=[15008] 00:11:59.067 bw ( KiB/s): min=20480, max=20480, per=26.33%, avg=20480.00, stdev= 0.00, samples=2 00:11:59.067 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:59.067 lat (usec) : 500=0.01% 00:11:59.067 lat (msec) : 4=0.31%, 10=1.01%, 20=98.66% 00:11:59.067 cpu : usr=3.59%, sys=15.37%, ctx=461, majf=0, minf=7 00:11:59.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:59.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.067 issued rwts: total=5054,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.067 job2: (groupid=0, jobs=1): err= 0: pid=77212: Mon Jul 15 12:56:11 2024 00:11:59.067 read: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1002msec) 00:11:59.067 slat (usec): min=6, max=4745, avg=110.22, stdev=567.11 00:11:59.067 clat (usec): min=507, max=24430, avg=14452.40, stdev=1773.20 00:11:59.067 lat (usec): min=4306, max=24442, avg=14562.62, stdev=1809.79 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[ 5211], 5.00th=[11731], 10.00th=[12649], 20.00th=[13960], 00:11:59.067 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:11:59.067 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16057], 95.00th=[17433], 00:11:59.067 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:11:59.067 | 99.99th=[24511] 00:11:59.067 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:59.067 slat (usec): min=12, max=4835, avg=106.37, stdev=500.28 00:11:59.067 clat (usec): min=9567, max=22637, avg=14034.44, stdev=1595.54 00:11:59.067 lat (usec): min=9889, max=22681, avg=14140.82, stdev=1579.67 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 
1.00th=[10290], 5.00th=[10945], 10.00th=[11338], 20.00th=[13435], 00:11:59.067 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:11:59.067 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16581], 00:11:59.067 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18744], 00:11:59.067 | 99.99th=[22676] 00:11:59.067 bw ( KiB/s): min=17752, max=19112, per=23.70%, avg=18432.00, stdev=961.67, samples=2 00:11:59.067 iops : min= 4438, max= 4778, avg=4608.00, stdev=240.42, samples=2 00:11:59.067 lat (usec) : 750=0.01% 00:11:59.067 lat (msec) : 10=1.09%, 20=98.81%, 50=0.09% 00:11:59.067 cpu : usr=4.10%, sys=13.89%, ctx=367, majf=0, minf=9 00:11:59.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:59.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.067 issued rwts: total=4279,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.067 job3: (groupid=0, jobs=1): err= 0: pid=77213: Mon Jul 15 12:56:11 2024 00:11:59.067 read: IOPS=4477, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:11:59.067 slat (usec): min=8, max=4564, avg=108.47, stdev=533.60 00:11:59.067 clat (usec): min=497, max=19471, avg=14194.26, stdev=1590.39 00:11:59.067 lat (usec): min=4413, max=19858, avg=14302.73, stdev=1615.63 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[ 8979], 5.00th=[11469], 10.00th=[12256], 20.00th=[13566], 00:11:59.067 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:11:59.067 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15401], 95.00th=[15926], 00:11:59.067 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:11:59.067 | 99.99th=[19530] 00:11:59.067 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:59.067 slat (usec): min=11, max=4232, avg=103.07, stdev=482.22 00:11:59.067 clat (usec): min=9980, max=17713, avg=13647.14, stdev=1389.97 00:11:59.067 lat (usec): min=10011, max=17752, avg=13750.21, stdev=1364.48 00:11:59.067 clat percentiles (usec): 00:11:59.067 | 1.00th=[10290], 5.00th=[10683], 10.00th=[11207], 20.00th=[13042], 00:11:59.067 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:11:59.067 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270], 00:11:59.067 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:11:59.067 | 99.99th=[17695] 00:11:59.067 bw ( KiB/s): min=17456, max=19408, per=23.70%, avg=18432.00, stdev=1380.27, samples=2 00:11:59.067 iops : min= 4364, max= 4852, avg=4608.00, stdev=345.07, samples=2 00:11:59.067 lat (usec) : 500=0.01% 00:11:59.067 lat (msec) : 10=0.97%, 20=99.02% 00:11:59.067 cpu : usr=4.19%, sys=13.97%, ctx=402, majf=0, minf=11 00:11:59.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:59.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.067 issued rwts: total=4491,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.067 00:11:59.067 Run status group 0 (all jobs): 00:11:59.067 READ: bw=73.6MiB/s (77.2MB/s), 16.7MiB/s-19.9MiB/s (17.5MB/s-20.9MB/s), io=74.0MiB (77.6MB), run=1002-1005msec 00:11:59.067 WRITE: bw=76.0MiB/s (79.6MB/s), 17.9MiB/s-20.2MiB/s 
(18.8MB/s-21.2MB/s), io=76.3MiB (80.0MB), run=1002-1005msec 00:11:59.067 00:11:59.067 Disk stats (read/write): 00:11:59.067 nvme0n1: ios=4249/4608, merge=0/0, ticks=25599/24127, in_queue=49726, util=88.78% 00:11:59.067 nvme0n2: ios=4148/4608, merge=0/0, ticks=12065/12057, in_queue=24122, util=87.88% 00:11:59.067 nvme0n3: ios=3584/4008, merge=0/0, ticks=16069/16024, in_queue=32093, util=89.03% 00:11:59.067 nvme0n4: ios=3710/4096, merge=0/0, ticks=16446/15641, in_queue=32087, util=89.69% 00:11:59.067 12:56:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:59.067 [global] 00:11:59.067 thread=1 00:11:59.067 invalidate=1 00:11:59.067 rw=randwrite 00:11:59.067 time_based=1 00:11:59.067 runtime=1 00:11:59.067 ioengine=libaio 00:11:59.067 direct=1 00:11:59.067 bs=4096 00:11:59.067 iodepth=128 00:11:59.067 norandommap=0 00:11:59.067 numjobs=1 00:11:59.067 00:11:59.067 verify_dump=1 00:11:59.067 verify_backlog=512 00:11:59.067 verify_state_save=0 00:11:59.067 do_verify=1 00:11:59.067 verify=crc32c-intel 00:11:59.067 [job0] 00:11:59.067 filename=/dev/nvme0n1 00:11:59.067 [job1] 00:11:59.067 filename=/dev/nvme0n2 00:11:59.067 [job2] 00:11:59.067 filename=/dev/nvme0n3 00:11:59.067 [job3] 00:11:59.067 filename=/dev/nvme0n4 00:11:59.067 Could not set queue depth (nvme0n1) 00:11:59.067 Could not set queue depth (nvme0n2) 00:11:59.067 Could not set queue depth (nvme0n3) 00:11:59.067 Could not set queue depth (nvme0n4) 00:11:59.067 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.067 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.067 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.067 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:59.067 fio-3.35 00:11:59.067 Starting 4 threads 00:12:00.489 00:12:00.489 job0: (groupid=0, jobs=1): err= 0: pid=77266: Mon Jul 15 12:56:12 2024 00:12:00.489 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:12:00.489 slat (usec): min=8, max=3059, avg=95.37, stdev=438.37 00:12:00.489 clat (usec): min=370, max=15002, avg=12713.71, stdev=1206.64 00:12:00.489 lat (usec): min=2953, max=16470, avg=12809.08, stdev=1137.70 00:12:00.489 clat percentiles (usec): 00:12:00.489 | 1.00th=[ 6325], 5.00th=[10683], 10.00th=[11600], 20.00th=[12649], 00:12:00.489 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:12:00.489 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13698], 00:12:00.489 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:12:00.489 | 99.99th=[15008] 00:12:00.489 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:00.489 slat (usec): min=10, max=3465, avg=93.28, stdev=377.23 00:12:00.489 clat (usec): min=9636, max=14873, avg=12242.85, stdev=1183.68 00:12:00.489 lat (usec): min=9664, max=14894, avg=12336.13, stdev=1176.78 00:12:00.489 clat percentiles (usec): 00:12:00.489 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:12:00.489 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12649], 60.00th=[12911], 00:12:00.489 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:12:00.489 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:12:00.489 | 
99.99th=[14877] 00:12:00.489 bw ( KiB/s): min=20480, max=20521, per=26.45%, avg=20500.50, stdev=28.99, samples=2 00:12:00.489 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:12:00.489 lat (usec) : 500=0.01% 00:12:00.489 lat (msec) : 4=0.32%, 10=0.82%, 20=98.86% 00:12:00.489 cpu : usr=4.80%, sys=15.18%, ctx=503, majf=0, minf=9 00:12:00.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:00.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.489 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.489 job1: (groupid=0, jobs=1): err= 0: pid=77267: Mon Jul 15 12:56:12 2024 00:12:00.489 read: IOPS=5097, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:12:00.489 slat (usec): min=8, max=3859, avg=95.13, stdev=476.11 00:12:00.489 clat (usec): min=655, max=16457, avg=12465.63, stdev=1375.91 00:12:00.489 lat (usec): min=3019, max=16492, avg=12560.76, stdev=1414.83 00:12:00.489 clat percentiles (usec): 00:12:00.489 | 1.00th=[ 7373], 5.00th=[10290], 10.00th=[11076], 20.00th=[12125], 00:12:00.489 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:12:00.489 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:12:00.489 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:12:00.489 | 99.99th=[16450] 00:12:00.489 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:12:00.489 slat (usec): min=10, max=6786, avg=92.34, stdev=391.44 00:12:00.489 clat (usec): min=8451, max=17860, avg=12219.21, stdev=1285.54 00:12:00.489 lat (usec): min=8475, max=17889, avg=12311.54, stdev=1278.28 00:12:00.489 clat percentiles (usec): 00:12:00.489 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11600], 00:12:00.489 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:12:00.489 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13829], 00:12:00.489 | 99.00th=[15926], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:12:00.489 | 99.99th=[17957] 00:12:00.489 bw ( KiB/s): min=20480, max=20480, per=26.42%, avg=20480.00, stdev= 0.00, samples=2 00:12:00.489 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:00.489 lat (usec) : 750=0.01% 00:12:00.489 lat (msec) : 4=0.41%, 10=5.01%, 20=94.57% 00:12:00.489 cpu : usr=5.09%, sys=14.27%, ctx=508, majf=0, minf=19 00:12:00.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:00.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.489 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.489 job2: (groupid=0, jobs=1): err= 0: pid=77268: Mon Jul 15 12:56:12 2024 00:12:00.489 read: IOPS=4159, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1002msec) 00:12:00.489 slat (usec): min=8, max=3766, avg=111.78, stdev=516.61 00:12:00.489 clat (usec): min=1091, max=17508, avg=14484.88, stdev=1421.03 00:12:00.489 lat (usec): min=1103, max=17966, avg=14596.66, stdev=1341.11 00:12:00.489 clat percentiles (usec): 00:12:00.489 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[13042], 20.00th=[14353], 00:12:00.489 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 
00:12:00.489 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15401], 95.00th=[15664], 00:12:00.489 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433], 00:12:00.489 | 99.99th=[17433] 00:12:00.489 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:00.489 slat (usec): min=11, max=5693, avg=107.29, stdev=433.18 00:12:00.490 clat (usec): min=11230, max=21371, avg=14237.24, stdev=1505.67 00:12:00.490 lat (usec): min=11267, max=21464, avg=14344.53, stdev=1505.48 00:12:00.490 clat percentiles (usec): 00:12:00.490 | 1.00th=[11731], 5.00th=[12125], 10.00th=[12387], 20.00th=[12649], 00:12:00.490 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14484], 60.00th=[14746], 00:12:00.490 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:12:00.490 | 99.00th=[18220], 99.50th=[19792], 99.90th=[21365], 99.95th=[21365], 00:12:00.490 | 99.99th=[21365] 00:12:00.490 bw ( KiB/s): min=17720, max=19144, per=23.78%, avg=18432.00, stdev=1006.92, samples=2 00:12:00.490 iops : min= 4430, max= 4786, avg=4608.00, stdev=251.73, samples=2 00:12:00.490 lat (msec) : 2=0.09%, 10=0.56%, 20=99.12%, 50=0.23% 00:12:00.490 cpu : usr=4.40%, sys=13.59%, ctx=501, majf=0, minf=13 00:12:00.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:00.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.490 issued rwts: total=4168,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.490 job3: (groupid=0, jobs=1): err= 0: pid=77269: Mon Jul 15 12:56:12 2024 00:12:00.490 read: IOPS=4158, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1004msec) 00:12:00.490 slat (usec): min=8, max=4161, avg=110.24, stdev=512.92 00:12:00.490 clat (usec): min=1649, max=20227, avg=14549.84, stdev=1734.59 00:12:00.490 lat (usec): min=4126, max=21719, avg=14660.09, stdev=1677.57 00:12:00.490 clat percentiles (usec): 00:12:00.490 | 1.00th=[ 8094], 5.00th=[11994], 10.00th=[13042], 20.00th=[14091], 00:12:00.490 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:12:00.490 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16581], 95.00th=[17433], 00:12:00.490 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:12:00.490 | 99.99th=[20317] 00:12:00.490 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:12:00.490 slat (usec): min=11, max=4048, avg=109.14, stdev=468.01 00:12:00.490 clat (usec): min=11267, max=19065, avg=14311.50, stdev=1746.56 00:12:00.490 lat (usec): min=11293, max=19087, avg=14420.63, stdev=1742.20 00:12:00.490 clat percentiles (usec): 00:12:00.490 | 1.00th=[11469], 5.00th=[11863], 10.00th=[12125], 20.00th=[12518], 00:12:00.490 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:12:00.490 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16581], 95.00th=[17957], 00:12:00.490 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:12:00.490 | 99.99th=[19006] 00:12:00.490 bw ( KiB/s): min=16585, max=19920, per=23.55%, avg=18252.50, stdev=2358.20, samples=2 00:12:00.490 iops : min= 4146, max= 4980, avg=4563.00, stdev=589.73, samples=2 00:12:00.490 lat (msec) : 2=0.01%, 10=0.73%, 20=99.17%, 50=0.09% 00:12:00.490 cpu : usr=4.19%, sys=13.76%, ctx=468, majf=0, minf=9 00:12:00.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:00.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.490 issued rwts: total=4175,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.490 00:12:00.490 Run status group 0 (all jobs): 00:12:00.490 READ: bw=71.9MiB/s (75.4MB/s), 16.2MiB/s-19.9MiB/s (17.0MB/s-20.9MB/s), io=72.2MiB (75.7MB), run=1002-1004msec 00:12:00.490 WRITE: bw=75.7MiB/s (79.4MB/s), 17.9MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1002-1004msec 00:12:00.490 00:12:00.490 Disk stats (read/write): 00:12:00.490 nvme0n1: ios=4242/4608, merge=0/0, ticks=12227/12003, in_queue=24230, util=90.58% 00:12:00.490 nvme0n2: ios=4262/4608, merge=0/0, ticks=15934/15929, in_queue=31863, util=88.80% 00:12:00.490 nvme0n3: ios=3584/4026, merge=0/0, ticks=12104/12265, in_queue=24369, util=89.31% 00:12:00.490 nvme0n4: ios=3612/3950, merge=0/0, ticks=12347/12618, in_queue=24965, util=90.51% 00:12:00.490 12:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:00.490 12:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77288 00:12:00.490 12:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:00.490 12:56:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:00.490 [global] 00:12:00.490 thread=1 00:12:00.490 invalidate=1 00:12:00.490 rw=read 00:12:00.490 time_based=1 00:12:00.490 runtime=10 00:12:00.490 ioengine=libaio 00:12:00.490 direct=1 00:12:00.490 bs=4096 00:12:00.490 iodepth=1 00:12:00.490 norandommap=1 00:12:00.490 numjobs=1 00:12:00.490 00:12:00.490 [job0] 00:12:00.490 filename=/dev/nvme0n1 00:12:00.490 [job1] 00:12:00.490 filename=/dev/nvme0n2 00:12:00.490 [job2] 00:12:00.490 filename=/dev/nvme0n3 00:12:00.490 [job3] 00:12:00.490 filename=/dev/nvme0n4 00:12:00.490 Could not set queue depth (nvme0n1) 00:12:00.490 Could not set queue depth (nvme0n2) 00:12:00.490 Could not set queue depth (nvme0n3) 00:12:00.490 Could not set queue depth (nvme0n4) 00:12:00.490 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.490 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.490 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.490 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.490 fio-3.35 00:12:00.490 Starting 4 threads 00:12:03.768 12:56:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:03.768 fio: pid=77336, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:03.768 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=38404096, buflen=4096 00:12:03.768 12:56:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:03.768 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=65912832, buflen=4096 00:12:03.768 fio: pid=77335, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:03.768 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:03.768 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:04.027 fio: pid=77332, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:04.027 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=48197632, buflen=4096 00:12:04.027 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.027 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:04.286 fio: pid=77333, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:04.286 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9146368, buflen=4096 00:12:04.545 00:12:04.545 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77332: Mon Jul 15 12:56:16 2024 00:12:04.545 read: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(46.0MiB/3580msec) 00:12:04.545 slat (usec): min=8, max=15283, avg=19.16, stdev=202.13 00:12:04.545 clat (usec): min=143, max=2496, avg=283.58, stdev=69.73 00:12:04.545 lat (usec): min=159, max=15484, avg=302.74, stdev=213.16 00:12:04.545 clat percentiles (usec): 00:12:04.545 | 1.00th=[ 157], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 229], 00:12:04.545 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:12:04.545 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 392], 00:12:04.545 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 668], 99.95th=[ 955], 00:12:04.545 | 99.99th=[ 1614] 00:12:04.545 bw ( KiB/s): min=10568, max=13448, per=21.28%, avg=12309.33, stdev=1070.57, samples=6 00:12:04.545 iops : min= 2642, max= 3362, avg=3077.33, stdev=267.64, samples=6 00:12:04.545 lat (usec) : 250=22.52%, 500=77.09%, 750=0.30%, 1000=0.05% 00:12:04.545 lat (msec) : 2=0.03%, 4=0.01% 00:12:04.545 cpu : usr=1.06%, sys=4.78%, ctx=11776, majf=0, minf=1 00:12:04.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 issued rwts: total=11768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.545 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77333: Mon Jul 15 12:56:16 2024 00:12:04.545 read: IOPS=4820, BW=18.8MiB/s (19.7MB/s)(72.7MiB/3862msec) 00:12:04.545 slat (usec): min=13, max=14763, avg=21.64, stdev=179.48 00:12:04.545 clat (usec): min=143, max=16048, avg=184.05, stdev=122.34 00:12:04.545 lat (usec): min=157, max=16063, avg=205.69, stdev=218.43 00:12:04.545 clat percentiles (usec): 00:12:04.545 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:12:04.545 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:12:04.545 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 221], 00:12:04.545 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 363], 99.95th=[ 758], 00:12:04.545 | 99.99th=[ 3064] 00:12:04.545 bw ( KiB/s): min=17270, max=20472, per=33.15%, avg=19178.00, stdev=1161.57, samples=7 00:12:04.545 iops : min= 4317, max= 5118, avg=4794.43, stdev=290.53, samples=7 00:12:04.545 lat (usec) : 250=98.25%, 500=1.67%, 750=0.03%, 1000=0.02% 00:12:04.545 lat (msec) : 2=0.02%, 4=0.01%, 20=0.01% 00:12:04.545 cpu : usr=1.74%, sys=7.98%, ctx=18633, majf=0, minf=1 00:12:04.545 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 issued rwts: total=18618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.545 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77335: Mon Jul 15 12:56:16 2024 00:12:04.545 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(62.9MiB/3287msec) 00:12:04.545 slat (usec): min=13, max=14188, avg=20.91, stdev=155.64 00:12:04.545 clat (usec): min=141, max=3724, avg=181.47, stdev=47.08 00:12:04.545 lat (usec): min=168, max=14407, avg=202.39, stdev=163.10 00:12:04.545 clat percentiles (usec): 00:12:04.545 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:12:04.545 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:12:04.545 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 210], 00:12:04.545 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 562], 99.95th=[ 685], 00:12:04.545 | 99.99th=[ 2999] 00:12:04.545 bw ( KiB/s): min=19384, max=20184, per=34.37%, avg=19882.67, stdev=309.95, samples=6 00:12:04.545 iops : min= 4846, max= 5046, avg=4970.67, stdev=77.49, samples=6 00:12:04.545 lat (usec) : 250=98.60%, 500=1.26%, 750=0.10%, 1000=0.02% 00:12:04.545 lat (msec) : 4=0.02% 00:12:04.545 cpu : usr=1.67%, sys=7.64%, ctx=16096, majf=0, minf=1 00:12:04.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.545 issued rwts: total=16093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.545 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77336: Mon Jul 15 12:56:16 2024 00:12:04.545 read: IOPS=3079, BW=12.0MiB/s (12.6MB/s)(36.6MiB/3045msec) 00:12:04.545 slat (nsec): min=8934, max=95681, avg=17540.25, stdev=6125.42 00:12:04.545 clat (usec): min=228, max=2570, avg=305.50, stdev=53.59 00:12:04.545 lat (usec): min=238, max=2590, avg=323.04, stdev=53.93 00:12:04.545 clat percentiles (usec): 00:12:04.545 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 273], 00:12:04.545 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:12:04.545 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 400], 00:12:04.545 | 99.00th=[ 461], 99.50th=[ 498], 99.90th=[ 619], 99.95th=[ 775], 00:12:04.545 | 99.99th=[ 2573] 00:12:04.545 bw ( KiB/s): min=10568, max=13448, per=21.28%, avg=12309.33, stdev=1072.27, samples=6 00:12:04.545 iops : min= 2642, max= 3362, avg=3077.33, stdev=268.07, samples=6 00:12:04.545 lat (usec) : 250=5.21%, 500=94.28%, 750=0.44%, 1000=0.03% 00:12:04.545 lat (msec) : 2=0.01%, 4=0.01% 00:12:04.545 cpu : usr=1.64%, sys=4.53%, ctx=9377, majf=0, minf=1 00:12:04.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.546 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.546 issued rwts: total=9377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.546 00:12:04.546 Run status group 0 (all 
jobs): 00:12:04.546 READ: bw=56.5MiB/s (59.2MB/s), 12.0MiB/s-19.1MiB/s (12.6MB/s-20.1MB/s), io=218MiB (229MB), run=3045-3862msec 00:12:04.546 00:12:04.546 Disk stats (read/write): 00:12:04.546 nvme0n1: ios=10741/0, merge=0/0, ticks=3048/0, in_queue=3048, util=95.19% 00:12:04.546 nvme0n2: ios=17273/0, merge=0/0, ticks=3267/0, in_queue=3267, util=95.45% 00:12:04.546 nvme0n3: ios=15352/0, merge=0/0, ticks=2866/0, in_queue=2866, util=96.12% 00:12:04.546 nvme0n4: ios=8793/0, merge=0/0, ticks=2647/0, in_queue=2647, util=96.69% 00:12:04.546 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.546 12:56:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:04.803 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:04.803 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:05.061 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.061 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:05.319 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.319 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:05.577 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:05.577 12:56:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77288 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:05.835 nvmf hotplug test: fio failed as expected 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:05.835 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.120 rmmod nvme_tcp 00:12:06.120 rmmod nvme_fabrics 00:12:06.120 rmmod nvme_keyring 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@493 -- # '[' -n 76808 ']' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@494 -- # killprocess 76808 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76808 ']' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76808 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76808 00:12:06.120 killing process with pid 76808 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76808' 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76808 00:12:06.120 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76808 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.379 12:56:18 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:12:06.379 00:12:06.379 real 0m19.212s 00:12:06.379 user 1m13.861s 00:12:06.379 sys 0m9.354s 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.379 12:56:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.379 ************************************ 00:12:06.379 END TEST nvmf_fio_target 00:12:06.379 ************************************ 00:12:06.379 12:56:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:06.379 12:56:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:06.379 12:56:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.379 12:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.379 12:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.379 ************************************ 00:12:06.379 START TEST nvmf_bdevio 00:12:06.379 ************************************ 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:06.379 * Looking for test storage... 00:12:06.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.379 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@55 
-- # have_pci_nics=0 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:06.637 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@436 -- # nvmf_veth_init 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:12:06.638 Cannot find device "nvmf_tgt_br" 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.638 Cannot find device "nvmf_tgt_br2" 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # true 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:12:06.638 Cannot find device "nvmf_tgt_br" 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:12:06.638 Cannot find device "nvmf_tgt_br2" 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:12:06.638 12:56:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:12:06.638 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # ip link 
set nvmf_tgt_br2 master nvmf_br 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:12:06.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:12:06.896 00:12:06.896 --- 10.0.0.2 ping statistics --- 00:12:06.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.896 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:12:06.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:06.896 00:12:06.896 --- 10.0.0.3 ping statistics --- 00:12:06.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.896 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:06.896 00:12:06.896 --- 10.0.0.1 ping statistics --- 00:12:06.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.896 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@437 -- # return 0 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.896 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
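The nvmf_veth_init steps traced above come down to a small veth-plus-bridge topology: the target listens on 10.0.0.2 inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the default namespace on 10.0.0.1, and both veth peers hang off the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of that setup, with interface names, addresses and the port copied from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is wired up the same way and omitted here):

  # Build the namespace, the veth pairs and the bridge the target and initiator talk across.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Open the NVMe/TCP port for the initiator and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity checks, as run in the trace.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1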
00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@485 -- # nvmfpid=77657 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@486 -- # waitforlisten 77657 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77657 ']' 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.897 12:56:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.897 [2024-07-15 12:56:19.274211] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:12:06.897 [2024-07-15 12:56:19.274298] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.155 [2024-07-15 12:56:19.410518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.155 [2024-07-15 12:56:19.497627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.155 [2024-07-15 12:56:19.498107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.155 [2024-07-15 12:56:19.498629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.155 [2024-07-15 12:56:19.499125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.155 [2024-07-15 12:56:19.499384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
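The launch just traced amounts to running nvmf_tgt inside that namespace with core mask 0x78 (cores 3-6, matching the reactor start-up messages that follow) and blocking until the JSON-RPC socket answers. A rough sketch under those assumptions; the real waitforlisten helper in autotest_common.sh adds PID checks, timeouts and error handling, and the rpc_get_methods polling loop here only approximates it:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done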
00:12:07.155 [2024-07-15 12:56:19.499782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:07.155 [2024-07-15 12:56:19.499902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:07.155 [2024-07-15 12:56:19.499980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:07.155 [2024-07-15 12:56:19.500214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 [2024-07-15 12:56:20.336732] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 Malloc0 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 [2024-07-15 12:56:20.394809] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:08.087 12:56:20 
nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@536 -- # config=() 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@536 -- # local subsystem config 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:12:08.087 { 00:12:08.087 "params": { 00:12:08.087 "name": "Nvme$subsystem", 00:12:08.087 "trtype": "$TEST_TRANSPORT", 00:12:08.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:08.087 "adrfam": "ipv4", 00:12:08.087 "trsvcid": "$NVMF_PORT", 00:12:08.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:08.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:08.087 "hdgst": ${hdgst:-false}, 00:12:08.087 "ddgst": ${ddgst:-false} 00:12:08.087 }, 00:12:08.087 "method": "bdev_nvme_attach_controller" 00:12:08.087 } 00:12:08.087 EOF 00:12:08.087 )") 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # cat 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@560 -- # jq . 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@561 -- # IFS=, 00:12:08.087 12:56:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:12:08.087 "params": { 00:12:08.087 "name": "Nvme1", 00:12:08.087 "trtype": "tcp", 00:12:08.087 "traddr": "10.0.0.2", 00:12:08.087 "adrfam": "ipv4", 00:12:08.087 "trsvcid": "4420", 00:12:08.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:08.087 "hdgst": false, 00:12:08.087 "ddgst": false 00:12:08.087 }, 00:12:08.087 "method": "bdev_nvme_attach_controller" 00:12:08.087 }' 00:12:08.087 [2024-07-15 12:56:20.454428] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:12:08.087 [2024-07-15 12:56:20.454713] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77711 ] 00:12:08.344 [2024-07-15 12:56:20.595969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.344 [2024-07-15 12:56:20.676802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.344 [2024-07-15 12:56:20.676884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.344 [2024-07-15 12:56:20.676909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.601 I/O targets: 00:12:08.601 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:08.601 00:12:08.601 00:12:08.601 CUnit - A unit testing framework for C - Version 2.1-3 00:12:08.601 http://cunit.sourceforge.net/ 00:12:08.601 00:12:08.601 00:12:08.601 Suite: bdevio tests on: Nvme1n1 00:12:08.602 Test: blockdev write read block ...passed 00:12:08.602 Test: blockdev write zeroes read block ...passed 00:12:08.602 Test: blockdev write zeroes read no split ...passed 00:12:08.602 Test: blockdev write zeroes read split ...passed 00:12:08.602 Test: blockdev write zeroes read split partial ...passed 00:12:08.602 Test: blockdev reset ...[2024-07-15 12:56:20.942051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:08.602 [2024-07-15 12:56:20.942353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8d180 (9): Bad file descriptor 00:12:08.602 [2024-07-15 12:56:20.955193] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
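The rpc_cmd calls traced above provision the whole target path before bdevio runs: a TCP transport, a 64 MiB malloc bdev, a subsystem with that bdev as its namespace, and a listener on 10.0.0.2:4420; bdevio then attaches through the generated JSON shown in the trace. Restated as direct rpc.py invocations with arguments copied from the trace; the final attach line is an equivalent standalone call whose flag names are assumed from scripts/rpc.py rather than taken from this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # flags exactly as bdevio.sh passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: roughly what the generated JSON resolves to for the bdevio app.
  $rpc bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1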
00:12:08.602 passed 00:12:08.602 Test: blockdev write read 8 blocks ...passed 00:12:08.602 Test: blockdev write read size > 128k ...passed 00:12:08.602 Test: blockdev write read invalid size ...passed 00:12:08.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.602 Test: blockdev write read max offset ...passed 00:12:08.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.859 Test: blockdev writev readv 8 blocks ...passed 00:12:08.859 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.859 Test: blockdev writev readv block ...passed 00:12:08.859 Test: blockdev writev readv size > 128k ...passed 00:12:08.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.859 Test: blockdev comparev and writev ...[2024-07-15 12:56:21.128411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.128601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.128642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.128660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.129014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.129050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.129080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.129100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.129427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.129464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.129500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.129519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:08.859 [2024-07-15 12:56:21.129922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.859 [2024-07-15 12:56:21.129965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:08.860 [2024-07-15 12:56:21.129997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:08.860 [2024-07-15 12:56:21.130017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:08.860 passed 00:12:08.860 Test: blockdev nvme passthru rw ...passed 00:12:08.860 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:56:21.212339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.860 [2024-07-15 12:56:21.212378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:08.860 [2024-07-15 12:56:21.212530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.860 [2024-07-15 12:56:21.212564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:08.860 [2024-07-15 12:56:21.212715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.860 [2024-07-15 12:56:21.212741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:08.860 passed 00:12:08.860 Test: blockdev nvme admin passthru ...[2024-07-15 12:56:21.212910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:08.860 [2024-07-15 12:56:21.212945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:08.860 passed 00:12:08.860 Test: blockdev copy ...passed 00:12:08.860 00:12:08.860 Run Summary: Type Total Ran Passed Failed Inactive 00:12:08.860 suites 1 1 n/a 0 0 00:12:08.860 tests 23 23 23 0 0 00:12:08.860 asserts 152 152 152 0 n/a 00:12:08.860 00:12:08.860 Elapsed time = 0.900 seconds 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.117 rmmod nvme_tcp 00:12:09.117 rmmod nvme_fabrics 00:12:09.117 rmmod nvme_keyring 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@493 -- # '[' -n 77657 ']' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@494 -- # killprocess 77657 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77657 ']' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77657 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77657 00:12:09.117 killing process with pid 77657 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77657' 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77657 00:12:09.117 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77657 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:12:09.375 00:12:09.375 real 0m3.022s 00:12:09.375 user 0m10.865s 00:12:09.375 sys 0m0.668s 00:12:09.375 ************************************ 00:12:09.375 END TEST nvmf_bdevio 00:12:09.375 ************************************ 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.375 12:56:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.375 12:56:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:09.375 12:56:21 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:09.375 12:56:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.375 12:56:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.375 12:56:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.375 ************************************ 00:12:09.375 START TEST nvmf_auth_target 00:12:09.375 ************************************ 00:12:09.375 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:09.633 * Looking for test storage... 
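The bdevio section above closes with nvmftestfini: sync, unload the kernel NVMe/TCP modules, kill the nvmf target by pid, drop the test network namespace, and flush the initiator-side address. A minimal bash sketch of that teardown, using the interface and namespace names from this log and a hypothetical $nvmfpid standing in for the target pid (77657 in this run):

#!/usr/bin/env bash
# Sketch of the nvmftestfini teardown traced above (names taken from the log).
sync
modprobe -v -r nvme-tcp            # -r also drops now-unused deps (nvme_fabrics, nvme_keyring)
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: signal, then reap
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # rough _remove_spdk_ns equivalent
ip -4 addr flush nvmf_init_if                          # drop the initiator-side 10.0.0.1/24 address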
00:12:09.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:09.633 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@18 -- # keys=() 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@436 -- # nvmf_veth_init 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:12:09.634 Cannot find device "nvmf_tgt_br" 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.634 Cannot find device "nvmf_tgt_br2" 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # true 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip 
link set nvmf_init_br down 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:12:09.634 Cannot find device "nvmf_tgt_br" 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:12:09.634 Cannot find device "nvmf_tgt_br2" 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:09.634 12:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.634 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link 
set nvmf_init_br master nvmf_br 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:12:09.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:09.892 00:12:09.892 --- 10.0.0.2 ping statistics --- 00:12:09.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.892 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:12:09.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:09.892 00:12:09.892 --- 10.0.0.3 ping statistics --- 00:12:09.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.892 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:12:09.892 00:12:09.892 --- 10.0.0.1 ping statistics --- 00:12:09.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.892 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@437 -- # return 0 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.892 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=77890 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 77890 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 
-- # '[' -z 77890 ']' 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.893 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77915 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=null 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=9d046345465d9be523afdd70593b05ee1845303fe1d093f2 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.Mip 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 9d046345465d9be523afdd70593b05ee1845303fe1d093f2 0 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 9d046345465d9be523afdd70593b05ee1845303fe1d093f2 0 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=9d046345465d9be523afdd70593b05ee1845303fe1d093f2 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=0 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 
/tmp/spdk.key-null.Mip 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.Mip 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Mip 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=2a69f831266e88d17fb09d631e49f7e3415c83fb1acfd698b4d7cae33ad48309 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.ieJ 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 2a69f831266e88d17fb09d631e49f7e3415c83fb1acfd698b4d7cae33ad48309 3 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 2a69f831266e88d17fb09d631e49f7e3415c83fb1acfd698b4d7cae33ad48309 3 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.460 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=2a69f831266e88d17fb09d631e49f7e3415c83fb1acfd698b4d7cae33ad48309 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.ieJ 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.ieJ 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ieJ 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=6e3a48cae24e89b3570608c84c7f15e4 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.ayD 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # 
format_dhchap_key 6e3a48cae24e89b3570608c84c7f15e4 1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 6e3a48cae24e89b3570608c84c7f15e4 1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=6e3a48cae24e89b3570608c84c7f15e4 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.ayD 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.ayD 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ayD 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=07f48031fa8136b2b37d66c74e04fa82c2ce5740bf9d8eac 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.Kpb 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 07f48031fa8136b2b37d66c74e04fa82c2ce5740bf9d8eac 2 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 07f48031fa8136b2b37d66c74e04fa82c2ce5740bf9d8eac 2 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=07f48031fa8136b2b37d66c74e04fa82c2ce5740bf9d8eac 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=2 00:12:10.461 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.Kpb 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.Kpb 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Kpb 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 
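The trace above is gen_dhchap_key at work: read N random bytes from /dev/urandom as hex with xxd, wrap the hex in a DHHC-1 secret whose two-digit index names the transformation hash (00 null, 01 sha256, 02 sha384, 03 sha512), and store the result mode 0600 in a mktemp file whose path lands in keys[] or ckeys[]. A minimal bash sketch of the same idea; the CRC32 trailer in the Python step is an assumption about the envelope layout, not code lifted from the SPDK helper:

#!/usr/bin/env bash
# Sketch of a gen_dhchap_key-style helper: $1 = digest index (0-3), $2 = hex length.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                         # e.g. 1 and 32, as for key1 above
    local hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom) # len hex chars = len/2 random bytes
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$digest" "$hex" > "$file" <<'PY'
import base64, binascii, sys
digest, hexkey = int(sys.argv[1]), sys.argv[2].strip().encode()
blob = hexkey + binascii.crc32(hexkey).to_bytes(4, "little")   # trailer layout assumed
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(blob).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

Newer nvme-cli builds also ship an "nvme gen-dhchap-key" subcommand that emits the same DHHC-1 envelope, which is a convenient cross-check for a hand-rolled secret.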
00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:12:10.719 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=8bcb8490bf060c46e9d3d8891812e6d535706db70947d956 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.LW9 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 8bcb8490bf060c46e9d3d8891812e6d535706db70947d956 2 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 8bcb8490bf060c46e9d3d8891812e6d535706db70947d956 2 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=8bcb8490bf060c46e9d3d8891812e6d535706db70947d956 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=2 00:12:10.720 12:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.LW9 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.LW9 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.LW9 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=87f7f30fcf843ddf5212cf6021a79354 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.9vQ 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 87f7f30fcf843ddf5212cf6021a79354 1 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 87f7f30fcf843ddf5212cf6021a79354 1 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=87f7f30fcf843ddf5212cf6021a79354 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 
/tmp/spdk.key-sha256.9vQ 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.9vQ 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9vQ 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@731 -- # key=c2068446db63b8da0379d4ff3fccfb74920ac9a5ef9912582a19a3eb31a33e20 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.gGL 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key c2068446db63b8da0379d4ff3fccfb74920ac9a5ef9912582a19a3eb31a33e20 3 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 c2068446db63b8da0379d4ff3fccfb74920ac9a5ef9912582a19a3eb31a33e20 3 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # key=c2068446db63b8da0379d4ff3fccfb74920ac9a5ef9912582a19a3eb31a33e20 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.gGL 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.gGL 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gGL 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77890 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77890 ']' 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
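With the four keys (and three controller keys) staged, the test needs two SPDK processes before any DH-HMAC-CHAP exchange can run: nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with -L nvmf_auth tracing (pid 77890 above), and a second spdk_tgt on /var/tmp/host.sock with -L nvme_auth playing the host side (pid 77915), each gated on its RPC socket answering. A minimal sketch of that bring-up, with paths shortened to the repo root and wait_for_rpc as a hypothetical stand-in for the waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the two-daemon setup behind nvmfappstart plus the host-side target.
wait_for_rpc() {                              # hypothetical waitforlisten stand-in
    local sock=$1
    until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
}

ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
wait_for_rpc /var/tmp/spdk.sock               # target-side RPC on the default socket

build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!
wait_for_rpc /var/tmp/host.sock               # host-side RPC used by the hostrpc calls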
00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.720 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77915 /var/tmp/host.sock 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77915 ']' 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.979 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mip 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mip 00:12:11.546 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Mip 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ieJ ]] 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ieJ 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ieJ 00:12:11.804 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.ieJ 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ayD 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ayD 00:12:12.062 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ayD 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Kpb ]] 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kpb 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kpb 00:12:12.321 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kpb 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LW9 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LW9 00:12:12.579 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LW9 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9vQ ]] 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9vQ 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9vQ 00:12:12.838 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9vQ 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:13.405 
12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gGL 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gGL 00:12:13.405 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gGL 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:13.664 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.923 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.181 00:12:14.182 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.182 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.182 12:56:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.439 { 00:12:14.439 "auth": { 00:12:14.439 "dhgroup": "null", 00:12:14.439 "digest": "sha256", 00:12:14.439 "state": "completed" 00:12:14.439 }, 00:12:14.439 "cntlid": 1, 00:12:14.439 "listen_address": { 00:12:14.439 "adrfam": "IPv4", 00:12:14.439 "traddr": "10.0.0.2", 00:12:14.439 "trsvcid": "4420", 00:12:14.439 "trtype": "TCP" 00:12:14.439 }, 00:12:14.439 "peer_address": { 00:12:14.439 "adrfam": "IPv4", 00:12:14.439 "traddr": "10.0.0.1", 00:12:14.439 "trsvcid": "34018", 00:12:14.439 "trtype": "TCP" 00:12:14.439 }, 00:12:14.439 "qid": 0, 00:12:14.439 "state": "enabled", 00:12:14.439 "thread": "nvmf_tgt_poll_group_000" 00:12:14.439 } 00:12:14.439 ]' 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.439 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.698 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:14.698 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.698 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.698 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.698 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.956 12:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.287 12:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.287 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.288 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.288 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.546 { 00:12:20.546 "auth": { 00:12:20.546 "dhgroup": "null", 00:12:20.546 "digest": "sha256", 00:12:20.546 "state": "completed" 00:12:20.546 }, 00:12:20.546 "cntlid": 3, 00:12:20.546 "listen_address": { 00:12:20.546 "adrfam": "IPv4", 00:12:20.546 "traddr": "10.0.0.2", 00:12:20.546 "trsvcid": "4420", 00:12:20.546 "trtype": "TCP" 00:12:20.546 }, 00:12:20.546 "peer_address": { 
00:12:20.546 "adrfam": "IPv4", 00:12:20.546 "traddr": "10.0.0.1", 00:12:20.546 "trsvcid": "34046", 00:12:20.546 "trtype": "TCP" 00:12:20.546 }, 00:12:20.546 "qid": 0, 00:12:20.546 "state": "enabled", 00:12:20.546 "thread": "nvmf_tgt_poll_group_000" 00:12:20.546 } 00:12:20.546 ]' 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.546 12:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.804 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:20.804 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.804 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.804 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.804 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.062 12:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:21.627 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.193 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.452 00:12:22.452 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.452 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.452 12:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.710 { 00:12:22.710 "auth": { 00:12:22.710 "dhgroup": "null", 00:12:22.710 "digest": "sha256", 00:12:22.710 "state": "completed" 00:12:22.710 }, 00:12:22.710 "cntlid": 5, 00:12:22.710 "listen_address": { 00:12:22.710 "adrfam": "IPv4", 00:12:22.710 "traddr": "10.0.0.2", 00:12:22.710 "trsvcid": "4420", 00:12:22.710 "trtype": "TCP" 00:12:22.710 }, 00:12:22.710 "peer_address": { 00:12:22.710 "adrfam": "IPv4", 00:12:22.710 "traddr": "10.0.0.1", 00:12:22.710 "trsvcid": "57192", 00:12:22.710 "trtype": "TCP" 00:12:22.710 }, 00:12:22.710 "qid": 0, 00:12:22.710 "state": "enabled", 00:12:22.710 "thread": "nvmf_tgt_poll_group_000" 00:12:22.710 } 00:12:22.710 ]' 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.710 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.969 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:22.969 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.969 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.969 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.969 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.226 12:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:12:23.791 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.049 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.306 12:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.306 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.306 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.563 00:12:24.563 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.563 12:56:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.563 12:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.821 { 00:12:24.821 "auth": { 00:12:24.821 "dhgroup": "null", 00:12:24.821 "digest": "sha256", 00:12:24.821 "state": "completed" 00:12:24.821 }, 00:12:24.821 "cntlid": 7, 00:12:24.821 "listen_address": { 00:12:24.821 "adrfam": "IPv4", 00:12:24.821 "traddr": "10.0.0.2", 00:12:24.821 "trsvcid": "4420", 00:12:24.821 "trtype": "TCP" 00:12:24.821 }, 00:12:24.821 "peer_address": { 00:12:24.821 "adrfam": "IPv4", 00:12:24.821 "traddr": "10.0.0.1", 00:12:24.821 "trsvcid": "57226", 00:12:24.821 "trtype": "TCP" 00:12:24.821 }, 00:12:24.821 "qid": 0, 00:12:24.821 "state": "enabled", 00:12:24.821 "thread": "nvmf_tgt_poll_group_000" 00:12:24.821 } 00:12:24.821 ]' 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.821 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.079 12:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:26.012 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.270 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.527 00:12:26.527 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.527 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.527 12:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.786 { 00:12:26.786 "auth": { 00:12:26.786 "dhgroup": "ffdhe2048", 00:12:26.786 "digest": "sha256", 00:12:26.786 "state": "completed" 00:12:26.786 }, 00:12:26.786 "cntlid": 9, 00:12:26.786 "listen_address": { 00:12:26.786 "adrfam": "IPv4", 
00:12:26.786 "traddr": "10.0.0.2", 00:12:26.786 "trsvcid": "4420", 00:12:26.786 "trtype": "TCP" 00:12:26.786 }, 00:12:26.786 "peer_address": { 00:12:26.786 "adrfam": "IPv4", 00:12:26.786 "traddr": "10.0.0.1", 00:12:26.786 "trsvcid": "57256", 00:12:26.786 "trtype": "TCP" 00:12:26.786 }, 00:12:26.786 "qid": 0, 00:12:26.786 "state": "enabled", 00:12:26.786 "thread": "nvmf_tgt_poll_group_000" 00:12:26.786 } 00:12:26.786 ]' 00:12:26.786 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.043 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.301 12:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:28.232 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.489 12:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.746 00:12:28.746 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.746 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.746 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.004 { 00:12:29.004 "auth": { 00:12:29.004 "dhgroup": "ffdhe2048", 00:12:29.004 "digest": "sha256", 00:12:29.004 "state": "completed" 00:12:29.004 }, 00:12:29.004 "cntlid": 11, 00:12:29.004 "listen_address": { 00:12:29.004 "adrfam": "IPv4", 00:12:29.004 "traddr": "10.0.0.2", 00:12:29.004 "trsvcid": "4420", 00:12:29.004 "trtype": "TCP" 00:12:29.004 }, 00:12:29.004 "peer_address": { 00:12:29.004 "adrfam": "IPv4", 00:12:29.004 "traddr": "10.0.0.1", 00:12:29.004 "trsvcid": "57292", 00:12:29.004 "trtype": "TCP" 00:12:29.004 }, 00:12:29.004 "qid": 0, 00:12:29.004 "state": "enabled", 00:12:29.004 "thread": "nvmf_tgt_poll_group_000" 00:12:29.004 } 00:12:29.004 ]' 00:12:29.004 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.262 12:56:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.262 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.520 12:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:12:30.454 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.454 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:30.454 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.454 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.455 12:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.019 00:12:31.019 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.019 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.019 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.276 { 00:12:31.276 "auth": { 00:12:31.276 "dhgroup": "ffdhe2048", 00:12:31.276 "digest": "sha256", 00:12:31.276 "state": "completed" 00:12:31.276 }, 00:12:31.276 "cntlid": 13, 00:12:31.276 "listen_address": { 00:12:31.276 "adrfam": "IPv4", 00:12:31.276 "traddr": "10.0.0.2", 00:12:31.276 "trsvcid": "4420", 00:12:31.276 "trtype": "TCP" 00:12:31.276 }, 00:12:31.276 "peer_address": { 00:12:31.276 "adrfam": "IPv4", 00:12:31.276 "traddr": "10.0.0.1", 00:12:31.276 "trsvcid": "57320", 00:12:31.276 "trtype": "TCP" 00:12:31.276 }, 00:12:31.276 "qid": 0, 00:12:31.276 "state": "enabled", 00:12:31.276 "thread": "nvmf_tgt_poll_group_000" 00:12:31.276 } 00:12:31.276 ]' 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.276 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.534 12:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 
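For readers following the trace: each pass of the loop above reduces to the short RPC sequence below. This is a minimal sketch distilled from the commands logged in this run (here the sha256/ffdhe2048 pass with key2); key2 and ckey2 are key names registered earlier in target/auth.sh and not shown in this excerpt, and rpc_cmd is assumed to talk to the target's default RPC socket while the -s /var/tmp/host.sock calls drive the host-side bdev_nvme layer.

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair for this pass.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN to authenticate with key2 (ckey2 enables bidirectional auth).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over TCP using the same key pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2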
00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.466 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.725 12:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.725 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.725 12:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.982 00:12:32.982 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.982 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.982 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.239 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.239 { 00:12:33.239 "auth": { 00:12:33.239 "dhgroup": 
"ffdhe2048", 00:12:33.239 "digest": "sha256", 00:12:33.239 "state": "completed" 00:12:33.239 }, 00:12:33.240 "cntlid": 15, 00:12:33.240 "listen_address": { 00:12:33.240 "adrfam": "IPv4", 00:12:33.240 "traddr": "10.0.0.2", 00:12:33.240 "trsvcid": "4420", 00:12:33.240 "trtype": "TCP" 00:12:33.240 }, 00:12:33.240 "peer_address": { 00:12:33.240 "adrfam": "IPv4", 00:12:33.240 "traddr": "10.0.0.1", 00:12:33.240 "trsvcid": "33356", 00:12:33.240 "trtype": "TCP" 00:12:33.240 }, 00:12:33.240 "qid": 0, 00:12:33.240 "state": "enabled", 00:12:33.240 "thread": "nvmf_tgt_poll_group_000" 00:12:33.240 } 00:12:33.240 ]' 00:12:33.240 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.240 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.240 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.240 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:33.240 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.497 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.497 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.497 12:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.754 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:34.687 12:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:34.687 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.688 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.253 00:12:35.254 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.254 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.254 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.511 { 00:12:35.511 "auth": { 00:12:35.511 "dhgroup": "ffdhe3072", 00:12:35.511 "digest": "sha256", 00:12:35.511 "state": "completed" 00:12:35.511 }, 00:12:35.511 "cntlid": 17, 00:12:35.511 "listen_address": { 00:12:35.511 "adrfam": "IPv4", 00:12:35.511 "traddr": "10.0.0.2", 00:12:35.511 "trsvcid": "4420", 00:12:35.511 "trtype": "TCP" 00:12:35.511 }, 00:12:35.511 "peer_address": { 00:12:35.511 "adrfam": "IPv4", 00:12:35.511 "traddr": "10.0.0.1", 00:12:35.511 "trsvcid": "33374", 00:12:35.511 "trtype": "TCP" 00:12:35.511 }, 00:12:35.511 "qid": 0, 00:12:35.511 "state": "enabled", 00:12:35.511 "thread": "nvmf_tgt_poll_group_000" 00:12:35.511 } 00:12:35.511 ]' 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.511 12:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.076 12:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:12:36.642 12:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.642 12:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:36.642 12:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.642 12:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.642 12:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.642 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.642 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.642 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.901 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.901 
12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.466 00:12:37.466 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.466 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.466 12:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.724 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.724 { 00:12:37.724 "auth": { 00:12:37.724 "dhgroup": "ffdhe3072", 00:12:37.724 "digest": "sha256", 00:12:37.724 "state": "completed" 00:12:37.724 }, 00:12:37.724 "cntlid": 19, 00:12:37.724 "listen_address": { 00:12:37.724 "adrfam": "IPv4", 00:12:37.724 "traddr": "10.0.0.2", 00:12:37.724 "trsvcid": "4420", 00:12:37.724 "trtype": "TCP" 00:12:37.724 }, 00:12:37.724 "peer_address": { 00:12:37.725 "adrfam": "IPv4", 00:12:37.725 "traddr": "10.0.0.1", 00:12:37.725 "trsvcid": "33396", 00:12:37.725 "trtype": "TCP" 00:12:37.725 }, 00:12:37.725 "qid": 0, 00:12:37.725 "state": "enabled", 00:12:37.725 "thread": "nvmf_tgt_poll_group_000" 00:12:37.725 } 00:12:37.725 ]' 00:12:37.725 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.725 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.725 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.725 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:37.725 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.012 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.012 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.012 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.291 12:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
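The same key is also exercised from the Linux kernel initiator. Stripped of the test wrapper, the connect/disconnect pair just above is the following sketch; the DHHC-1 secrets are the literal key material printed in this log for the key1 pass, and -i 1 is taken to request a single I/O queue to keep the test connection small.

# Kernel initiator: authenticate with the host secret (and controller secret for
# bidirectional DH-HMAC-CHAP), then tear the association down again.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: \
    --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0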
00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.866 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.125 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.692 00:12:39.692 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.692 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.692 12:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.951 { 00:12:39.951 "auth": { 00:12:39.951 "dhgroup": "ffdhe3072", 00:12:39.951 "digest": "sha256", 00:12:39.951 "state": "completed" 00:12:39.951 }, 00:12:39.951 "cntlid": 21, 00:12:39.951 "listen_address": { 00:12:39.951 "adrfam": "IPv4", 00:12:39.951 "traddr": "10.0.0.2", 00:12:39.951 "trsvcid": "4420", 00:12:39.951 "trtype": "TCP" 00:12:39.951 }, 00:12:39.951 "peer_address": { 00:12:39.951 "adrfam": "IPv4", 00:12:39.951 "traddr": "10.0.0.1", 00:12:39.951 "trsvcid": "33420", 00:12:39.951 "trtype": "TCP" 00:12:39.951 }, 00:12:39.951 "qid": 0, 00:12:39.951 "state": "enabled", 00:12:39.951 "thread": "nvmf_tgt_poll_group_000" 00:12:39.951 } 00:12:39.951 ]' 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:39.951 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.209 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.209 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.209 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.468 12:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:12:41.032 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.290 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:41.549 12:56:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.549 12:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.807 00:12:41.807 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.807 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.807 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.374 { 00:12:42.374 "auth": { 00:12:42.374 "dhgroup": "ffdhe3072", 00:12:42.374 "digest": "sha256", 00:12:42.374 "state": "completed" 00:12:42.374 }, 00:12:42.374 "cntlid": 23, 00:12:42.374 "listen_address": { 00:12:42.374 "adrfam": "IPv4", 00:12:42.374 "traddr": "10.0.0.2", 00:12:42.374 "trsvcid": "4420", 00:12:42.374 "trtype": "TCP" 00:12:42.374 }, 00:12:42.374 "peer_address": { 00:12:42.374 "adrfam": "IPv4", 00:12:42.374 "traddr": "10.0.0.1", 00:12:42.374 "trsvcid": "33450", 00:12:42.374 "trtype": "TCP" 00:12:42.374 }, 00:12:42.374 "qid": 0, 00:12:42.374 "state": "enabled", 00:12:42.374 "thread": "nvmf_tgt_poll_group_000" 00:12:42.374 } 00:12:42.374 ]' 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.374 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.632 12:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.564 12:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.128 00:12:44.128 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.128 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.128 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.427 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.427 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.427 12:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.427 12:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.427 12:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.428 { 00:12:44.428 "auth": { 00:12:44.428 "dhgroup": "ffdhe4096", 00:12:44.428 "digest": "sha256", 00:12:44.428 "state": "completed" 00:12:44.428 }, 00:12:44.428 "cntlid": 25, 00:12:44.428 "listen_address": { 00:12:44.428 "adrfam": "IPv4", 00:12:44.428 "traddr": "10.0.0.2", 00:12:44.428 "trsvcid": "4420", 00:12:44.428 "trtype": "TCP" 00:12:44.428 }, 00:12:44.428 "peer_address": { 00:12:44.428 "adrfam": "IPv4", 00:12:44.428 "traddr": "10.0.0.1", 00:12:44.428 "trsvcid": "59228", 00:12:44.428 "trtype": "TCP" 00:12:44.428 }, 00:12:44.428 "qid": 0, 00:12:44.428 "state": "enabled", 00:12:44.428 "thread": "nvmf_tgt_poll_group_000" 00:12:44.428 } 00:12:44.428 ]' 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.428 12:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.999 12:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret 
DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.932 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.497 00:12:46.497 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.497 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.497 12:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
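Each pass traced here is one call of connect_authenticate() from target/auth.sh and follows the same pattern: restrict the host-side initiator to a single digest/dhgroup pair with bdev_nvme_set_options, authorize the host NQN on the subsystem with the key under test via nvmf_subsystem_add_host, force a DH-HMAC-CHAP handshake by attaching a controller with the matching key, read the negotiated parameters back from the qpair, and detach again. A condensed sketch of that sequence, using only the RPCs, flags, and jq filters that appear in this trace (the shell variables below are placeholders added for readability; the socket path, key names, and addresses are taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

    # host side: allow only one digest/dhgroup combination for this pass
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # target side: authorize the host with the key (and controller key) under test
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # attaching the controller triggers the DH-HMAC-CHAP handshake
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify what the target actually negotiated on the resulting qpair
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe4096
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed

    # tear down before the next key/dhgroup combination
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect lines that follow each pass repeat the same check with the kernel initiator, passing the DHHC-1 secrets directly on the command line, and nvmf_subsystem_remove_host then clears the host entry before the next key is tried.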
00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.755 { 00:12:46.755 "auth": { 00:12:46.755 "dhgroup": "ffdhe4096", 00:12:46.755 "digest": "sha256", 00:12:46.755 "state": "completed" 00:12:46.755 }, 00:12:46.755 "cntlid": 27, 00:12:46.755 "listen_address": { 00:12:46.755 "adrfam": "IPv4", 00:12:46.755 "traddr": "10.0.0.2", 00:12:46.755 "trsvcid": "4420", 00:12:46.755 "trtype": "TCP" 00:12:46.755 }, 00:12:46.755 "peer_address": { 00:12:46.755 "adrfam": "IPv4", 00:12:46.755 "traddr": "10.0.0.1", 00:12:46.755 "trsvcid": "59244", 00:12:46.755 "trtype": "TCP" 00:12:46.755 }, 00:12:46.755 "qid": 0, 00:12:46.755 "state": "enabled", 00:12:46.755 "thread": "nvmf_tgt_poll_group_000" 00:12:46.755 } 00:12:46.755 ]' 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.755 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.323 12:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:47.903 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.161 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.728 00:12:48.728 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.728 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.728 12:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.986 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.986 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.986 12:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.986 12:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.987 { 00:12:48.987 "auth": { 00:12:48.987 "dhgroup": "ffdhe4096", 00:12:48.987 "digest": "sha256", 00:12:48.987 "state": "completed" 00:12:48.987 }, 00:12:48.987 "cntlid": 29, 00:12:48.987 "listen_address": { 00:12:48.987 "adrfam": "IPv4", 00:12:48.987 "traddr": "10.0.0.2", 00:12:48.987 "trsvcid": "4420", 00:12:48.987 "trtype": "TCP" 00:12:48.987 }, 00:12:48.987 "peer_address": { 00:12:48.987 "adrfam": "IPv4", 00:12:48.987 "traddr": "10.0.0.1", 00:12:48.987 "trsvcid": "59270", 00:12:48.987 "trtype": "TCP" 00:12:48.987 }, 00:12:48.987 "qid": 0, 00:12:48.987 "state": "enabled", 00:12:48.987 "thread": 
"nvmf_tgt_poll_group_000" 00:12:48.987 } 00:12:48.987 ]' 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.987 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.553 12:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:50.120 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.379 12:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.943 00:12:50.943 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.943 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.943 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.215 { 00:12:51.215 "auth": { 00:12:51.215 "dhgroup": "ffdhe4096", 00:12:51.215 "digest": "sha256", 00:12:51.215 "state": "completed" 00:12:51.215 }, 00:12:51.215 "cntlid": 31, 00:12:51.215 "listen_address": { 00:12:51.215 "adrfam": "IPv4", 00:12:51.215 "traddr": "10.0.0.2", 00:12:51.215 "trsvcid": "4420", 00:12:51.215 "trtype": "TCP" 00:12:51.215 }, 00:12:51.215 "peer_address": { 00:12:51.215 "adrfam": "IPv4", 00:12:51.215 "traddr": "10.0.0.1", 00:12:51.215 "trsvcid": "59294", 00:12:51.215 "trtype": "TCP" 00:12:51.215 }, 00:12:51.215 "qid": 0, 00:12:51.215 "state": "enabled", 00:12:51.215 "thread": "nvmf_tgt_poll_group_000" 00:12:51.215 } 00:12:51.215 ]' 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.215 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.793 12:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 
2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:52.359 12:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:52.617 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:52.617 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.876 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.135 00:12:53.135 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.135 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.135 12:57:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.701 { 00:12:53.701 "auth": { 00:12:53.701 "dhgroup": "ffdhe6144", 00:12:53.701 "digest": "sha256", 00:12:53.701 "state": "completed" 00:12:53.701 }, 00:12:53.701 "cntlid": 33, 00:12:53.701 "listen_address": { 00:12:53.701 "adrfam": "IPv4", 00:12:53.701 "traddr": "10.0.0.2", 00:12:53.701 "trsvcid": "4420", 00:12:53.701 "trtype": "TCP" 00:12:53.701 }, 00:12:53.701 "peer_address": { 00:12:53.701 "adrfam": "IPv4", 00:12:53.701 "traddr": "10.0.0.1", 00:12:53.701 "trsvcid": "33104", 00:12:53.701 "trtype": "TCP" 00:12:53.701 }, 00:12:53.701 "qid": 0, 00:12:53.701 "state": "enabled", 00:12:53.701 "thread": "nvmf_tgt_poll_group_000" 00:12:53.701 } 00:12:53.701 ]' 00:12:53.701 12:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.701 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.959 12:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.893 
12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:54.893 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.152 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.719 00:12:55.719 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.719 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.719 12:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.977 { 00:12:55.977 "auth": { 00:12:55.977 "dhgroup": "ffdhe6144", 00:12:55.977 "digest": "sha256", 00:12:55.977 "state": "completed" 00:12:55.977 }, 00:12:55.977 "cntlid": 35, 00:12:55.977 "listen_address": { 00:12:55.977 "adrfam": "IPv4", 00:12:55.977 "traddr": "10.0.0.2", 00:12:55.977 "trsvcid": "4420", 00:12:55.977 "trtype": "TCP" 00:12:55.977 }, 00:12:55.977 "peer_address": { 00:12:55.977 
"adrfam": "IPv4", 00:12:55.977 "traddr": "10.0.0.1", 00:12:55.977 "trsvcid": "33138", 00:12:55.977 "trtype": "TCP" 00:12:55.977 }, 00:12:55.977 "qid": 0, 00:12:55.977 "state": "enabled", 00:12:55.977 "thread": "nvmf_tgt_poll_group_000" 00:12:55.977 } 00:12:55.977 ]' 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.977 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.544 12:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:57.110 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.390 12:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.956 00:12:57.956 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.956 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.956 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.214 { 00:12:58.214 "auth": { 00:12:58.214 "dhgroup": "ffdhe6144", 00:12:58.214 "digest": "sha256", 00:12:58.214 "state": "completed" 00:12:58.214 }, 00:12:58.214 "cntlid": 37, 00:12:58.214 "listen_address": { 00:12:58.214 "adrfam": "IPv4", 00:12:58.214 "traddr": "10.0.0.2", 00:12:58.214 "trsvcid": "4420", 00:12:58.214 "trtype": "TCP" 00:12:58.214 }, 00:12:58.214 "peer_address": { 00:12:58.214 "adrfam": "IPv4", 00:12:58.214 "traddr": "10.0.0.1", 00:12:58.214 "trsvcid": "33166", 00:12:58.214 "trtype": "TCP" 00:12:58.214 }, 00:12:58.214 "qid": 0, 00:12:58.214 "state": "enabled", 00:12:58.214 "thread": "nvmf_tgt_poll_group_000" 00:12:58.214 } 00:12:58.214 ]' 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.214 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.473 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:58.473 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.473 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.473 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.473 12:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.731 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.666 12:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.666 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.233 00:13:00.233 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
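The key3 pass in progress here differs from the earlier ones: nvmf_subsystem_add_host and bdev_nvme_attach_controller receive --dhchap-key key3 but no --dhchap-ctrlr-key, because the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion logged at auth.sh@37 collapses to nothing when no controller key is defined for that index, so this iteration exercises one-way authentication (the target verifies the host, but the host does not challenge the controller back). A rough reconstruction of the loop skeleton driving these passes, pieced together from the for-lines at auth.sh@92/@93 and that expansion (array contents are placeholders; the real values are prepared earlier in target/auth.sh and are not part of this excerpt):

    # placeholders -- only ffdhe3072..ffdhe8192 and key0..key3 are visible in this excerpt
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3)
    ckeys=(ckey0 ckey1 ckey2 "")              # empty slot: key3 has no controller key

    for dhgroup in "${dhgroups[@]}"; do       # auth.sh@92
        for keyid in "${!keys[@]}"; do        # auth.sh@93
            # hostrpc and connect_authenticate are helpers defined in target/auth.sh
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"   # add_host, attach, verify, detach
        done
    done

    # inside connect_authenticate(), where $3 is the key index, the controller key is optional:
    #   ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    # with ckeys[3] empty the array stays empty, so add_host and attach_controller for key3
    # are issued with --dhchap-key key3 only.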
00:13:00.233 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.233 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.491 { 00:13:00.491 "auth": { 00:13:00.491 "dhgroup": "ffdhe6144", 00:13:00.491 "digest": "sha256", 00:13:00.491 "state": "completed" 00:13:00.491 }, 00:13:00.491 "cntlid": 39, 00:13:00.491 "listen_address": { 00:13:00.491 "adrfam": "IPv4", 00:13:00.491 "traddr": "10.0.0.2", 00:13:00.491 "trsvcid": "4420", 00:13:00.491 "trtype": "TCP" 00:13:00.491 }, 00:13:00.491 "peer_address": { 00:13:00.491 "adrfam": "IPv4", 00:13:00.491 "traddr": "10.0.0.1", 00:13:00.491 "trsvcid": "33194", 00:13:00.491 "trtype": "TCP" 00:13:00.491 }, 00:13:00.491 "qid": 0, 00:13:00.491 "state": "enabled", 00:13:00.491 "thread": "nvmf_tgt_poll_group_000" 00:13:00.491 } 00:13:00.491 ]' 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.491 12:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.748 12:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.748 12:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.748 12:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.006 12:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.937 12:57:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.937 12:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.938 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.938 12:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.871 00:13:02.871 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.871 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.871 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.130 { 00:13:03.130 "auth": { 00:13:03.130 "dhgroup": "ffdhe8192", 00:13:03.130 "digest": "sha256", 00:13:03.130 "state": "completed" 00:13:03.130 }, 00:13:03.130 "cntlid": 41, 00:13:03.130 
"listen_address": { 00:13:03.130 "adrfam": "IPv4", 00:13:03.130 "traddr": "10.0.0.2", 00:13:03.130 "trsvcid": "4420", 00:13:03.130 "trtype": "TCP" 00:13:03.130 }, 00:13:03.130 "peer_address": { 00:13:03.130 "adrfam": "IPv4", 00:13:03.130 "traddr": "10.0.0.1", 00:13:03.130 "trsvcid": "44534", 00:13:03.130 "trtype": "TCP" 00:13:03.130 }, 00:13:03.130 "qid": 0, 00:13:03.130 "state": "enabled", 00:13:03.130 "thread": "nvmf_tgt_poll_group_000" 00:13:03.130 } 00:13:03.130 ]' 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.130 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.389 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.389 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.389 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.692 12:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:04.273 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.274 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:04.532 12:57:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.532 12:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.465 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.465 12:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.723 12:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.723 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.723 { 00:13:05.723 "auth": { 00:13:05.723 "dhgroup": "ffdhe8192", 00:13:05.723 "digest": "sha256", 00:13:05.723 "state": "completed" 00:13:05.723 }, 00:13:05.723 "cntlid": 43, 00:13:05.723 "listen_address": { 00:13:05.723 "adrfam": "IPv4", 00:13:05.723 "traddr": "10.0.0.2", 00:13:05.723 "trsvcid": "4420", 00:13:05.723 "trtype": "TCP" 00:13:05.723 }, 00:13:05.723 "peer_address": { 00:13:05.723 "adrfam": "IPv4", 00:13:05.723 "traddr": "10.0.0.1", 00:13:05.723 "trsvcid": "44564", 00:13:05.723 "trtype": "TCP" 00:13:05.723 }, 00:13:05.723 "qid": 0, 00:13:05.723 "state": "enabled", 00:13:05.723 "thread": "nvmf_tgt_poll_group_000" 00:13:05.723 } 00:13:05.723 ]' 00:13:05.723 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.723 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.723 12:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.723 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.723 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.723 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:13:05.723 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.723 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.981 12:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.913 12:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.844 00:13:07.844 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.844 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.844 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.101 { 00:13:08.101 "auth": { 00:13:08.101 "dhgroup": "ffdhe8192", 00:13:08.101 "digest": "sha256", 00:13:08.101 "state": "completed" 00:13:08.101 }, 00:13:08.101 "cntlid": 45, 00:13:08.101 "listen_address": { 00:13:08.101 "adrfam": "IPv4", 00:13:08.101 "traddr": "10.0.0.2", 00:13:08.101 "trsvcid": "4420", 00:13:08.101 "trtype": "TCP" 00:13:08.101 }, 00:13:08.101 "peer_address": { 00:13:08.101 "adrfam": "IPv4", 00:13:08.101 "traddr": "10.0.0.1", 00:13:08.101 "trsvcid": "44584", 00:13:08.101 "trtype": "TCP" 00:13:08.101 }, 00:13:08.101 "qid": 0, 00:13:08.101 "state": "enabled", 00:13:08.101 "thread": "nvmf_tgt_poll_group_000" 00:13:08.101 } 00:13:08.101 ]' 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.101 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.668 12:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.235 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.493 12:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.059 00:13:10.059 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.059 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.059 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:10.627 { 00:13:10.627 "auth": { 00:13:10.627 "dhgroup": "ffdhe8192", 00:13:10.627 "digest": "sha256", 00:13:10.627 "state": "completed" 00:13:10.627 }, 00:13:10.627 "cntlid": 47, 00:13:10.627 "listen_address": { 00:13:10.627 "adrfam": "IPv4", 00:13:10.627 "traddr": "10.0.0.2", 00:13:10.627 "trsvcid": "4420", 00:13:10.627 "trtype": "TCP" 00:13:10.627 }, 00:13:10.627 "peer_address": { 00:13:10.627 "adrfam": "IPv4", 00:13:10.627 "traddr": "10.0.0.1", 00:13:10.627 "trsvcid": "44612", 00:13:10.627 "trtype": "TCP" 00:13:10.627 }, 00:13:10.627 "qid": 0, 00:13:10.627 "state": "enabled", 00:13:10.627 "thread": "nvmf_tgt_poll_group_000" 00:13:10.627 } 00:13:10.627 ]' 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.627 12:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.885 12:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:11.816 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.075 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.332 00:13:12.332 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.332 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.332 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.589 { 00:13:12.589 "auth": { 00:13:12.589 "dhgroup": "null", 00:13:12.589 "digest": "sha384", 00:13:12.589 "state": "completed" 00:13:12.589 }, 00:13:12.589 "cntlid": 49, 00:13:12.589 "listen_address": { 00:13:12.589 "adrfam": "IPv4", 00:13:12.589 "traddr": "10.0.0.2", 00:13:12.589 "trsvcid": "4420", 00:13:12.589 "trtype": "TCP" 00:13:12.589 }, 00:13:12.589 "peer_address": { 00:13:12.589 "adrfam": "IPv4", 00:13:12.589 "traddr": "10.0.0.1", 00:13:12.589 "trsvcid": "43740", 00:13:12.589 "trtype": "TCP" 00:13:12.589 }, 00:13:12.589 "qid": 0, 00:13:12.589 "state": "enabled", 00:13:12.589 "thread": "nvmf_tgt_poll_group_000" 00:13:12.589 } 00:13:12.589 ]' 00:13:12.589 12:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.589 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.589 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.846 12:57:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:12.846 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.846 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.846 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.846 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.103 12:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.037 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.604 00:13:14.604 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.604 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.604 12:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.862 { 00:13:14.862 "auth": { 00:13:14.862 "dhgroup": "null", 00:13:14.862 "digest": "sha384", 00:13:14.862 "state": "completed" 00:13:14.862 }, 00:13:14.862 "cntlid": 51, 00:13:14.862 "listen_address": { 00:13:14.862 "adrfam": "IPv4", 00:13:14.862 "traddr": "10.0.0.2", 00:13:14.862 "trsvcid": "4420", 00:13:14.862 "trtype": "TCP" 00:13:14.862 }, 00:13:14.862 "peer_address": { 00:13:14.862 "adrfam": "IPv4", 00:13:14.862 "traddr": "10.0.0.1", 00:13:14.862 "trsvcid": "43764", 00:13:14.862 "trtype": "TCP" 00:13:14.862 }, 00:13:14.862 "qid": 0, 00:13:14.862 "state": "enabled", 00:13:14.862 "thread": "nvmf_tgt_poll_group_000" 00:13:14.862 } 00:13:14.862 ]' 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.862 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.120 12:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.075 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.332 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.589 00:13:16.589 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.589 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.589 12:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:16.846 { 00:13:16.846 "auth": { 00:13:16.846 "dhgroup": "null", 00:13:16.846 "digest": "sha384", 00:13:16.846 "state": "completed" 00:13:16.846 }, 00:13:16.846 "cntlid": 53, 00:13:16.846 "listen_address": { 00:13:16.846 "adrfam": "IPv4", 00:13:16.846 "traddr": "10.0.0.2", 00:13:16.846 "trsvcid": "4420", 00:13:16.846 "trtype": "TCP" 00:13:16.846 }, 00:13:16.846 "peer_address": { 00:13:16.846 "adrfam": "IPv4", 00:13:16.846 "traddr": "10.0.0.1", 00:13:16.846 "trsvcid": "43796", 00:13:16.846 "trtype": "TCP" 00:13:16.846 }, 00:13:16.846 "qid": 0, 00:13:16.846 "state": "enabled", 00:13:16.846 "thread": "nvmf_tgt_poll_group_000" 00:13:16.846 } 00:13:16.846 ]' 00:13:16.846 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.104 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.361 12:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:18.294 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.552 12:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.809 00:13:18.809 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.809 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.809 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.066 { 00:13:19.066 "auth": { 00:13:19.066 "dhgroup": "null", 00:13:19.066 "digest": "sha384", 00:13:19.066 "state": "completed" 00:13:19.066 }, 00:13:19.066 "cntlid": 55, 00:13:19.066 "listen_address": { 00:13:19.066 "adrfam": "IPv4", 00:13:19.066 "traddr": "10.0.0.2", 00:13:19.066 "trsvcid": "4420", 00:13:19.066 "trtype": "TCP" 00:13:19.066 }, 00:13:19.066 "peer_address": { 00:13:19.066 "adrfam": "IPv4", 00:13:19.066 "traddr": "10.0.0.1", 00:13:19.066 "trsvcid": "43840", 00:13:19.066 "trtype": "TCP" 00:13:19.066 }, 00:13:19.066 "qid": 0, 00:13:19.066 "state": "enabled", 00:13:19.066 "thread": "nvmf_tgt_poll_group_000" 00:13:19.066 } 00:13:19.066 ]' 00:13:19.066 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.324 12:57:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.324 12:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.581 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:20.515 12:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.773 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.030 00:13:21.030 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.030 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.030 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.288 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.546 { 00:13:21.546 "auth": { 00:13:21.546 "dhgroup": "ffdhe2048", 00:13:21.546 "digest": "sha384", 00:13:21.546 "state": "completed" 00:13:21.546 }, 00:13:21.546 "cntlid": 57, 00:13:21.546 "listen_address": { 00:13:21.546 "adrfam": "IPv4", 00:13:21.546 "traddr": "10.0.0.2", 00:13:21.546 "trsvcid": "4420", 00:13:21.546 "trtype": "TCP" 00:13:21.546 }, 00:13:21.546 "peer_address": { 00:13:21.546 "adrfam": "IPv4", 00:13:21.546 "traddr": "10.0.0.1", 00:13:21.546 "trsvcid": "43872", 00:13:21.546 "trtype": "TCP" 00:13:21.546 }, 00:13:21.546 "qid": 0, 00:13:21.546 "state": "enabled", 00:13:21.546 "thread": "nvmf_tgt_poll_group_000" 00:13:21.546 } 00:13:21.546 ]' 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.546 12:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.829 12:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret 
DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:22.762 12:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:22.762 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.022 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.280 00:13:23.280 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.280 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.280 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.539 { 00:13:23.539 "auth": { 00:13:23.539 "dhgroup": "ffdhe2048", 00:13:23.539 "digest": "sha384", 00:13:23.539 "state": "completed" 00:13:23.539 }, 00:13:23.539 "cntlid": 59, 00:13:23.539 "listen_address": { 00:13:23.539 "adrfam": "IPv4", 00:13:23.539 "traddr": "10.0.0.2", 00:13:23.539 "trsvcid": "4420", 00:13:23.539 "trtype": "TCP" 00:13:23.539 }, 00:13:23.539 "peer_address": { 00:13:23.539 "adrfam": "IPv4", 00:13:23.539 "traddr": "10.0.0.1", 00:13:23.539 "trsvcid": "53608", 00:13:23.539 "trtype": "TCP" 00:13:23.539 }, 00:13:23.539 "qid": 0, 00:13:23.539 "state": "enabled", 00:13:23.539 "thread": "nvmf_tgt_poll_group_000" 00:13:23.539 } 00:13:23.539 ]' 00:13:23.539 12:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.797 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.056 12:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:24.992 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.250 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.508 00:13:25.508 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.508 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.508 12:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.766 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.766 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.766 12:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.766 12:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.023 12:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.023 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.023 { 00:13:26.023 "auth": { 00:13:26.023 "dhgroup": "ffdhe2048", 00:13:26.023 "digest": "sha384", 00:13:26.023 "state": "completed" 00:13:26.023 }, 00:13:26.023 "cntlid": 61, 00:13:26.023 "listen_address": { 00:13:26.023 "adrfam": "IPv4", 00:13:26.023 "traddr": "10.0.0.2", 00:13:26.023 "trsvcid": "4420", 00:13:26.023 "trtype": "TCP" 00:13:26.023 }, 00:13:26.023 "peer_address": { 00:13:26.023 "adrfam": "IPv4", 00:13:26.023 "traddr": "10.0.0.1", 00:13:26.023 "trsvcid": "53638", 00:13:26.023 "trtype": "TCP" 00:13:26.023 }, 00:13:26.023 "qid": 0, 00:13:26.023 "state": "enabled", 00:13:26.024 "thread": 
"nvmf_tgt_poll_group_000" 00:13:26.024 } 00:13:26.024 ]' 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.024 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.281 12:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.212 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.469 12:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.773 00:13:27.773 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.773 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.773 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.353 { 00:13:28.353 "auth": { 00:13:28.353 "dhgroup": "ffdhe2048", 00:13:28.353 "digest": "sha384", 00:13:28.353 "state": "completed" 00:13:28.353 }, 00:13:28.353 "cntlid": 63, 00:13:28.353 "listen_address": { 00:13:28.353 "adrfam": "IPv4", 00:13:28.353 "traddr": "10.0.0.2", 00:13:28.353 "trsvcid": "4420", 00:13:28.353 "trtype": "TCP" 00:13:28.353 }, 00:13:28.353 "peer_address": { 00:13:28.353 "adrfam": "IPv4", 00:13:28.353 "traddr": "10.0.0.1", 00:13:28.353 "trsvcid": "53682", 00:13:28.353 "trtype": "TCP" 00:13:28.353 }, 00:13:28.353 "qid": 0, 00:13:28.353 "state": "enabled", 00:13:28.353 "thread": "nvmf_tgt_poll_group_000" 00:13:28.353 } 00:13:28.353 ]' 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.353 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.611 12:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 
2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.177 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.745 12:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.004 00:13:30.004 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.004 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.004 12:57:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.262 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.262 { 00:13:30.262 "auth": { 00:13:30.262 "dhgroup": "ffdhe3072", 00:13:30.263 "digest": "sha384", 00:13:30.263 "state": "completed" 00:13:30.263 }, 00:13:30.263 "cntlid": 65, 00:13:30.263 "listen_address": { 00:13:30.263 "adrfam": "IPv4", 00:13:30.263 "traddr": "10.0.0.2", 00:13:30.263 "trsvcid": "4420", 00:13:30.263 "trtype": "TCP" 00:13:30.263 }, 00:13:30.263 "peer_address": { 00:13:30.263 "adrfam": "IPv4", 00:13:30.263 "traddr": "10.0.0.1", 00:13:30.263 "trsvcid": "53698", 00:13:30.263 "trtype": "TCP" 00:13:30.263 }, 00:13:30.263 "qid": 0, 00:13:30.263 "state": "enabled", 00:13:30.263 "thread": "nvmf_tgt_poll_group_000" 00:13:30.263 } 00:13:30.263 ]' 00:13:30.263 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.263 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.263 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.521 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.521 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.521 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.521 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.521 12:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.780 12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.370 
12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.370 12:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.628 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.195 00:13:32.195 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.195 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.195 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.454 { 00:13:32.454 "auth": { 00:13:32.454 "dhgroup": "ffdhe3072", 00:13:32.454 "digest": "sha384", 00:13:32.454 "state": "completed" 00:13:32.454 }, 00:13:32.454 "cntlid": 67, 00:13:32.454 "listen_address": { 00:13:32.454 "adrfam": "IPv4", 00:13:32.454 "traddr": "10.0.0.2", 00:13:32.454 "trsvcid": "4420", 00:13:32.454 "trtype": "TCP" 00:13:32.454 }, 00:13:32.454 "peer_address": { 00:13:32.454 
"adrfam": "IPv4", 00:13:32.454 "traddr": "10.0.0.1", 00:13:32.454 "trsvcid": "33548", 00:13:32.454 "trtype": "TCP" 00:13:32.454 }, 00:13:32.454 "qid": 0, 00:13:32.454 "state": "enabled", 00:13:32.454 "thread": "nvmf_tgt_poll_group_000" 00:13:32.454 } 00:13:32.454 ]' 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.454 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.712 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.712 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.712 12:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.970 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:33.536 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:33.537 12:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.795 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.053 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.053 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.053 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.311 00:13:34.311 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.311 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.311 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.569 { 00:13:34.569 "auth": { 00:13:34.569 "dhgroup": "ffdhe3072", 00:13:34.569 "digest": "sha384", 00:13:34.569 "state": "completed" 00:13:34.569 }, 00:13:34.569 "cntlid": 69, 00:13:34.569 "listen_address": { 00:13:34.569 "adrfam": "IPv4", 00:13:34.569 "traddr": "10.0.0.2", 00:13:34.569 "trsvcid": "4420", 00:13:34.569 "trtype": "TCP" 00:13:34.569 }, 00:13:34.569 "peer_address": { 00:13:34.569 "adrfam": "IPv4", 00:13:34.569 "traddr": "10.0.0.1", 00:13:34.569 "trsvcid": "33580", 00:13:34.569 "trtype": "TCP" 00:13:34.569 }, 00:13:34.569 "qid": 0, 00:13:34.569 "state": "enabled", 00:13:34.569 "thread": "nvmf_tgt_poll_group_000" 00:13:34.569 } 00:13:34.569 ]' 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.569 12:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.569 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.828 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:34.828 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.828 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.828 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.828 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.086 12:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.020 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.278 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.279 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.536 00:13:36.536 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.536 
12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.536 12:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.794 { 00:13:36.794 "auth": { 00:13:36.794 "dhgroup": "ffdhe3072", 00:13:36.794 "digest": "sha384", 00:13:36.794 "state": "completed" 00:13:36.794 }, 00:13:36.794 "cntlid": 71, 00:13:36.794 "listen_address": { 00:13:36.794 "adrfam": "IPv4", 00:13:36.794 "traddr": "10.0.0.2", 00:13:36.794 "trsvcid": "4420", 00:13:36.794 "trtype": "TCP" 00:13:36.794 }, 00:13:36.794 "peer_address": { 00:13:36.794 "adrfam": "IPv4", 00:13:36.794 "traddr": "10.0.0.1", 00:13:36.794 "trsvcid": "33612", 00:13:36.794 "trtype": "TCP" 00:13:36.794 }, 00:13:36.794 "qid": 0, 00:13:36.794 "state": "enabled", 00:13:36.794 "thread": "nvmf_tgt_poll_group_000" 00:13:36.794 } 00:13:36.794 ]' 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.794 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.052 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.052 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.052 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.052 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.052 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.308 12:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.875 12:57:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:37.875 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.443 12:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.701 00:13:38.701 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.701 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.701 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.960 { 00:13:38.960 "auth": { 00:13:38.960 "dhgroup": "ffdhe4096", 00:13:38.960 "digest": "sha384", 00:13:38.960 "state": "completed" 00:13:38.960 }, 00:13:38.960 "cntlid": 73, 00:13:38.960 
"listen_address": { 00:13:38.960 "adrfam": "IPv4", 00:13:38.960 "traddr": "10.0.0.2", 00:13:38.960 "trsvcid": "4420", 00:13:38.960 "trtype": "TCP" 00:13:38.960 }, 00:13:38.960 "peer_address": { 00:13:38.960 "adrfam": "IPv4", 00:13:38.960 "traddr": "10.0.0.1", 00:13:38.960 "trsvcid": "33638", 00:13:38.960 "trtype": "TCP" 00:13:38.960 }, 00:13:38.960 "qid": 0, 00:13:38.960 "state": "enabled", 00:13:38.960 "thread": "nvmf_tgt_poll_group_000" 00:13:38.960 } 00:13:38.960 ]' 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.960 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.219 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.219 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.219 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.477 12:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:40.042 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.301 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:40.560 12:57:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.560 12:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.871 00:13:40.871 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.871 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.871 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.128 { 00:13:41.128 "auth": { 00:13:41.128 "dhgroup": "ffdhe4096", 00:13:41.128 "digest": "sha384", 00:13:41.128 "state": "completed" 00:13:41.128 }, 00:13:41.128 "cntlid": 75, 00:13:41.128 "listen_address": { 00:13:41.128 "adrfam": "IPv4", 00:13:41.128 "traddr": "10.0.0.2", 00:13:41.128 "trsvcid": "4420", 00:13:41.128 "trtype": "TCP" 00:13:41.128 }, 00:13:41.128 "peer_address": { 00:13:41.128 "adrfam": "IPv4", 00:13:41.128 "traddr": "10.0.0.1", 00:13:41.128 "trsvcid": "33662", 00:13:41.128 "trtype": "TCP" 00:13:41.128 }, 00:13:41.128 "qid": 0, 00:13:41.128 "state": "enabled", 00:13:41.128 "thread": "nvmf_tgt_poll_group_000" 00:13:41.128 } 00:13:41.128 ]' 00:13:41.128 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.387 12:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.646 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:42.580 12:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:42.580 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.838 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.097 00:13:43.097 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.097 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.097 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.355 { 00:13:43.355 "auth": { 00:13:43.355 "dhgroup": "ffdhe4096", 00:13:43.355 "digest": "sha384", 00:13:43.355 "state": "completed" 00:13:43.355 }, 00:13:43.355 "cntlid": 77, 00:13:43.355 "listen_address": { 00:13:43.355 "adrfam": "IPv4", 00:13:43.355 "traddr": "10.0.0.2", 00:13:43.355 "trsvcid": "4420", 00:13:43.355 "trtype": "TCP" 00:13:43.355 }, 00:13:43.355 "peer_address": { 00:13:43.355 "adrfam": "IPv4", 00:13:43.355 "traddr": "10.0.0.1", 00:13:43.355 "trsvcid": "49832", 00:13:43.355 "trtype": "TCP" 00:13:43.355 }, 00:13:43.355 "qid": 0, 00:13:43.355 "state": "enabled", 00:13:43.355 "thread": "nvmf_tgt_poll_group_000" 00:13:43.355 } 00:13:43.355 ]' 00:13:43.355 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.613 12:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.871 12:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:44.805 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.063 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:45.063 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.064 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.323 00:13:45.323 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.323 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.323 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.582 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.582 12:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.582 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.582 12:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.582 12:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.582 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:45.582 { 00:13:45.582 "auth": { 00:13:45.582 "dhgroup": "ffdhe4096", 00:13:45.582 "digest": "sha384", 00:13:45.582 "state": "completed" 00:13:45.582 }, 00:13:45.582 "cntlid": 79, 00:13:45.582 "listen_address": { 00:13:45.582 "adrfam": "IPv4", 00:13:45.582 "traddr": "10.0.0.2", 00:13:45.582 "trsvcid": "4420", 00:13:45.582 "trtype": "TCP" 00:13:45.582 }, 00:13:45.582 "peer_address": { 00:13:45.582 "adrfam": "IPv4", 00:13:45.582 "traddr": "10.0.0.1", 00:13:45.582 "trsvcid": "49862", 00:13:45.582 "trtype": "TCP" 00:13:45.582 }, 00:13:45.582 "qid": 0, 00:13:45.582 "state": "enabled", 00:13:45.582 "thread": "nvmf_tgt_poll_group_000" 00:13:45.582 } 00:13:45.582 ]' 00:13:45.582 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.840 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.099 12:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:47.034 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.299 12:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.582 00:13:47.839 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.839 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.839 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.098 { 00:13:48.098 "auth": { 00:13:48.098 "dhgroup": "ffdhe6144", 00:13:48.098 "digest": "sha384", 00:13:48.098 "state": "completed" 00:13:48.098 }, 00:13:48.098 "cntlid": 81, 00:13:48.098 "listen_address": { 00:13:48.098 "adrfam": "IPv4", 00:13:48.098 "traddr": "10.0.0.2", 00:13:48.098 "trsvcid": "4420", 00:13:48.098 "trtype": "TCP" 00:13:48.098 }, 00:13:48.098 "peer_address": { 00:13:48.098 "adrfam": "IPv4", 00:13:48.098 "traddr": "10.0.0.1", 00:13:48.098 "trsvcid": "49880", 00:13:48.098 "trtype": "TCP" 00:13:48.098 }, 00:13:48.098 "qid": 0, 00:13:48.098 "state": "enabled", 00:13:48.098 "thread": "nvmf_tgt_poll_group_000" 00:13:48.098 } 00:13:48.098 ]' 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.098 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.358 12:58:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:49.293 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.550 12:58:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.114 00:13:50.114 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.114 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.114 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.372 { 00:13:50.372 "auth": { 00:13:50.372 "dhgroup": "ffdhe6144", 00:13:50.372 "digest": "sha384", 00:13:50.372 "state": "completed" 00:13:50.372 }, 00:13:50.372 "cntlid": 83, 00:13:50.372 "listen_address": { 00:13:50.372 "adrfam": "IPv4", 00:13:50.372 "traddr": "10.0.0.2", 00:13:50.372 "trsvcid": "4420", 00:13:50.372 "trtype": "TCP" 00:13:50.372 }, 00:13:50.372 "peer_address": { 00:13:50.372 "adrfam": "IPv4", 00:13:50.372 "traddr": "10.0.0.1", 00:13:50.372 "trsvcid": "49912", 00:13:50.372 "trtype": "TCP" 00:13:50.372 }, 00:13:50.372 "qid": 0, 00:13:50.372 "state": "enabled", 00:13:50.372 "thread": "nvmf_tgt_poll_group_000" 00:13:50.372 } 00:13:50.372 ]' 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.372 12:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.941 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:51.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:51.504 12:58:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.761 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.325 00:13:52.325 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.325 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.325 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.582 { 00:13:52.582 "auth": { 00:13:52.582 "dhgroup": "ffdhe6144", 00:13:52.582 "digest": "sha384", 00:13:52.582 "state": "completed" 00:13:52.582 }, 00:13:52.582 "cntlid": 85, 00:13:52.582 "listen_address": { 00:13:52.582 "adrfam": "IPv4", 00:13:52.582 "traddr": "10.0.0.2", 00:13:52.582 "trsvcid": "4420", 00:13:52.582 "trtype": "TCP" 00:13:52.582 }, 00:13:52.582 "peer_address": { 00:13:52.582 "adrfam": "IPv4", 00:13:52.582 "traddr": "10.0.0.1", 00:13:52.582 "trsvcid": "53308", 00:13:52.582 "trtype": "TCP" 00:13:52.582 }, 00:13:52.582 "qid": 0, 00:13:52.582 "state": "enabled", 00:13:52.582 "thread": "nvmf_tgt_poll_group_000" 00:13:52.582 } 00:13:52.582 ]' 00:13:52.582 12:58:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.582 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.582 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.840 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.840 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.840 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.840 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.840 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.097 12:58:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.027 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.285 12:58:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.285 12:58:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.852 00:13:54.852 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.852 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.852 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.110 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.110 { 00:13:55.110 "auth": { 00:13:55.110 "dhgroup": "ffdhe6144", 00:13:55.110 "digest": "sha384", 00:13:55.110 "state": "completed" 00:13:55.110 }, 00:13:55.110 "cntlid": 87, 00:13:55.110 "listen_address": { 00:13:55.110 "adrfam": "IPv4", 00:13:55.110 "traddr": "10.0.0.2", 00:13:55.110 "trsvcid": "4420", 00:13:55.110 "trtype": "TCP" 00:13:55.110 }, 00:13:55.110 "peer_address": { 00:13:55.110 "adrfam": "IPv4", 00:13:55.110 "traddr": "10.0.0.1", 00:13:55.110 "trsvcid": "53334", 00:13:55.110 "trtype": "TCP" 00:13:55.110 }, 00:13:55.110 "qid": 0, 00:13:55.110 "state": "enabled", 00:13:55.110 "thread": "nvmf_tgt_poll_group_000" 00:13:55.110 } 00:13:55.110 ]' 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.111 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.369 12:58:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.303 12:58:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.303 12:58:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.236 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.236 12:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.237 12:58:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.237 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.237 { 00:13:57.237 "auth": { 00:13:57.237 "dhgroup": "ffdhe8192", 00:13:57.237 "digest": "sha384", 00:13:57.237 "state": "completed" 00:13:57.237 }, 00:13:57.237 "cntlid": 89, 00:13:57.237 "listen_address": { 00:13:57.237 "adrfam": "IPv4", 00:13:57.237 "traddr": "10.0.0.2", 00:13:57.237 "trsvcid": "4420", 00:13:57.237 "trtype": "TCP" 00:13:57.237 }, 00:13:57.237 "peer_address": { 00:13:57.237 "adrfam": "IPv4", 00:13:57.237 "traddr": "10.0.0.1", 00:13:57.237 "trsvcid": "53352", 00:13:57.237 "trtype": "TCP" 00:13:57.237 }, 00:13:57.237 "qid": 0, 00:13:57.237 "state": "enabled", 00:13:57.237 "thread": "nvmf_tgt_poll_group_000" 00:13:57.237 } 00:13:57.237 ]' 00:13:57.237 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.237 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.237 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.495 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.495 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.495 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.495 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.495 12:58:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.752 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret 
DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:58.685 12:58:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.944 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.510 00:13:59.510 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.510 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.510 12:58:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
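The segment above finishes one authentication round (sha384 / ffdhe8192, key0) and starts the next one for key1. Condensed from the RPC calls visible in this trace, one round looks roughly like the sketch below. Assumptions to note: the host-side bdev_nvme_* calls go through /var/tmp/host.sock as shown, while the target-side nvmf_* calls are issued by rpc_cmd whose socket is not visible in the trace (the default target socket is assumed here), and the key0..key3 / ckey0..ckey3 keyring entries and DHHC-1 secrets are assumed to have been registered earlier in the run.

    # Sketch of one DHCHAP round, per (digest, dhgroup, key index) combination.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock                      # host-side SPDK RPC server
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
    digest=sha384; dhgroup=ffdhe8192; keyid=1

    # Restrict the host initiator to the digest/dhgroup under test.
    $rpc -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Register the host on the target with the matching key pair (default
    # target socket assumed; --dhchap-ctrlr-key is only passed for key
    # indexes that have a controller key, as the trace shows for key3).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach over TCP with authentication, then inspect the negotiated session.
    $rpc -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

    # Tear down before the next combination (the trace also repeats the round
    # with the kernel host via 'nvme connect/disconnect' and DHHC-1 secrets).
    $rpc -s "$hostsock" bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
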
00:13:59.768 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.768 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.768 12:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.025 { 00:14:00.025 "auth": { 00:14:00.025 "dhgroup": "ffdhe8192", 00:14:00.025 "digest": "sha384", 00:14:00.025 "state": "completed" 00:14:00.025 }, 00:14:00.025 "cntlid": 91, 00:14:00.025 "listen_address": { 00:14:00.025 "adrfam": "IPv4", 00:14:00.025 "traddr": "10.0.0.2", 00:14:00.025 "trsvcid": "4420", 00:14:00.025 "trtype": "TCP" 00:14:00.025 }, 00:14:00.025 "peer_address": { 00:14:00.025 "adrfam": "IPv4", 00:14:00.025 "traddr": "10.0.0.1", 00:14:00.025 "trsvcid": "53386", 00:14:00.025 "trtype": "TCP" 00:14:00.025 }, 00:14:00.025 "qid": 0, 00:14:00.025 "state": "enabled", 00:14:00.025 "thread": "nvmf_tgt_poll_group_000" 00:14:00.025 } 00:14:00.025 ]' 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.025 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.284 12:58:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.221 12:58:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.163 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.163 12:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.421 { 00:14:02.421 "auth": { 00:14:02.421 "dhgroup": "ffdhe8192", 00:14:02.421 "digest": "sha384", 00:14:02.421 "state": "completed" 00:14:02.421 }, 00:14:02.421 "cntlid": 93, 00:14:02.421 "listen_address": { 00:14:02.421 "adrfam": "IPv4", 00:14:02.421 "traddr": "10.0.0.2", 00:14:02.421 "trsvcid": "4420", 00:14:02.421 "trtype": "TCP" 00:14:02.421 }, 00:14:02.421 "peer_address": { 00:14:02.421 "adrfam": "IPv4", 00:14:02.421 "traddr": "10.0.0.1", 00:14:02.421 "trsvcid": "53422", 00:14:02.421 
"trtype": "TCP" 00:14:02.421 }, 00:14:02.421 "qid": 0, 00:14:02.421 "state": "enabled", 00:14:02.421 "thread": "nvmf_tgt_poll_group_000" 00:14:02.421 } 00:14:02.421 ]' 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.421 12:58:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.679 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:03.610 12:58:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:03.868 12:58:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.868 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.432 00:14:04.432 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.432 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.432 12:58:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.997 { 00:14:04.997 "auth": { 00:14:04.997 "dhgroup": "ffdhe8192", 00:14:04.997 "digest": "sha384", 00:14:04.997 "state": "completed" 00:14:04.997 }, 00:14:04.997 "cntlid": 95, 00:14:04.997 "listen_address": { 00:14:04.997 "adrfam": "IPv4", 00:14:04.997 "traddr": "10.0.0.2", 00:14:04.997 "trsvcid": "4420", 00:14:04.997 "trtype": "TCP" 00:14:04.997 }, 00:14:04.997 "peer_address": { 00:14:04.997 "adrfam": "IPv4", 00:14:04.997 "traddr": "10.0.0.1", 00:14:04.997 "trsvcid": "37216", 00:14:04.997 "trtype": "TCP" 00:14:04.997 }, 00:14:04.997 "qid": 0, 00:14:04.997 "state": "enabled", 00:14:04.997 "thread": "nvmf_tgt_poll_group_000" 00:14:04.997 } 00:14:04.997 ]' 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.997 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.254 12:58:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.187 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.444 12:58:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.702 00:14:06.702 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:06.702 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.702 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.959 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.959 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.960 12:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.960 12:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.960 12:58:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.960 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.960 { 00:14:06.960 "auth": { 00:14:06.960 "dhgroup": "null", 00:14:06.960 "digest": "sha512", 00:14:06.960 "state": "completed" 00:14:06.960 }, 00:14:06.960 "cntlid": 97, 00:14:06.960 "listen_address": { 00:14:06.960 "adrfam": "IPv4", 00:14:06.960 "traddr": "10.0.0.2", 00:14:06.960 "trsvcid": "4420", 00:14:06.960 "trtype": "TCP" 00:14:06.960 }, 00:14:06.960 "peer_address": { 00:14:06.960 "adrfam": "IPv4", 00:14:06.960 "traddr": "10.0.0.1", 00:14:06.960 "trsvcid": "37250", 00:14:06.960 "trtype": "TCP" 00:14:06.960 }, 00:14:06.960 "qid": 0, 00:14:06.960 "state": "enabled", 00:14:06.960 "thread": "nvmf_tgt_poll_group_000" 00:14:06.960 } 00:14:06.960 ]' 00:14:06.960 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.217 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.477 12:58:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.472 
12:58:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:08.472 12:58:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.729 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.986 00:14:08.986 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.986 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.986 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.244 { 00:14:09.244 "auth": { 00:14:09.244 "dhgroup": "null", 00:14:09.244 "digest": "sha512", 00:14:09.244 "state": "completed" 00:14:09.244 }, 00:14:09.244 "cntlid": 99, 00:14:09.244 "listen_address": { 
00:14:09.244 "adrfam": "IPv4", 00:14:09.244 "traddr": "10.0.0.2", 00:14:09.244 "trsvcid": "4420", 00:14:09.244 "trtype": "TCP" 00:14:09.244 }, 00:14:09.244 "peer_address": { 00:14:09.244 "adrfam": "IPv4", 00:14:09.244 "traddr": "10.0.0.1", 00:14:09.244 "trsvcid": "37282", 00:14:09.244 "trtype": "TCP" 00:14:09.244 }, 00:14:09.244 "qid": 0, 00:14:09.244 "state": "enabled", 00:14:09.244 "thread": "nvmf_tgt_poll_group_000" 00:14:09.244 } 00:14:09.244 ]' 00:14:09.244 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.502 12:58:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.761 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.695 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.696 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.696 12:58:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
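Each round is verified the same way before teardown: the host RPC confirms that controller nvme0 came up, and the target's qpair listing is checked for the expected digest and dhgroup and for a completed authentication state. A minimal standalone version of that check, using the same jq filters that appear in the trace (expected values are parameters; the target-side call again assumes the default target RPC socket, which the trace does not show):

    # Verify the authenticated qpair for the current round (assumes nvme0
    # was just attached through /var/tmp/host.sock as in the trace).
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    expect_digest=${1:-sha512}     # sha256 / sha384 / sha512
    expect_dhgroup=${2:-null}      # null / ffdhe2048 ... ffdhe8192

    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$expect_digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$expect_dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
    echo "auth OK: $expect_digest / $expect_dhgroup"
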
00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.696 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.953 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.953 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.953 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.211 00:14:11.211 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.211 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.211 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.468 { 00:14:11.468 "auth": { 00:14:11.468 "dhgroup": "null", 00:14:11.468 "digest": "sha512", 00:14:11.468 "state": "completed" 00:14:11.468 }, 00:14:11.468 "cntlid": 101, 00:14:11.468 "listen_address": { 00:14:11.468 "adrfam": "IPv4", 00:14:11.468 "traddr": "10.0.0.2", 00:14:11.468 "trsvcid": "4420", 00:14:11.468 "trtype": "TCP" 00:14:11.468 }, 00:14:11.468 "peer_address": { 00:14:11.468 "adrfam": "IPv4", 00:14:11.468 "traddr": "10.0.0.1", 00:14:11.468 "trsvcid": "37310", 00:14:11.468 "trtype": "TCP" 00:14:11.468 }, 00:14:11.468 "qid": 0, 00:14:11.468 "state": "enabled", 00:14:11.468 "thread": "nvmf_tgt_poll_group_000" 00:14:11.468 } 00:14:11.468 ]' 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:11.468 12:58:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.726 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.656 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.657 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.657 12:58:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.914 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.171 00:14:13.171 12:58:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.171 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.171 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.455 { 00:14:13.455 "auth": { 00:14:13.455 "dhgroup": "null", 00:14:13.455 "digest": "sha512", 00:14:13.455 "state": "completed" 00:14:13.455 }, 00:14:13.455 "cntlid": 103, 00:14:13.455 "listen_address": { 00:14:13.455 "adrfam": "IPv4", 00:14:13.455 "traddr": "10.0.0.2", 00:14:13.455 "trsvcid": "4420", 00:14:13.455 "trtype": "TCP" 00:14:13.455 }, 00:14:13.455 "peer_address": { 00:14:13.455 "adrfam": "IPv4", 00:14:13.455 "traddr": "10.0.0.1", 00:14:13.455 "trsvcid": "59946", 00:14:13.455 "trtype": "TCP" 00:14:13.455 }, 00:14:13.455 "qid": 0, 00:14:13.455 "state": "enabled", 00:14:13.455 "thread": "nvmf_tgt_poll_group_000" 00:14:13.455 } 00:14:13.455 ]' 00:14:13.455 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.713 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.713 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.713 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.713 12:58:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.713 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.713 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.713 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.971 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.536 12:58:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.793 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.051 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.051 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.051 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.309 00:14:15.309 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.309 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.309 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.568 { 00:14:15.568 "auth": { 00:14:15.568 "dhgroup": "ffdhe2048", 00:14:15.568 "digest": "sha512", 00:14:15.568 "state": 
"completed" 00:14:15.568 }, 00:14:15.568 "cntlid": 105, 00:14:15.568 "listen_address": { 00:14:15.568 "adrfam": "IPv4", 00:14:15.568 "traddr": "10.0.0.2", 00:14:15.568 "trsvcid": "4420", 00:14:15.568 "trtype": "TCP" 00:14:15.568 }, 00:14:15.568 "peer_address": { 00:14:15.568 "adrfam": "IPv4", 00:14:15.568 "traddr": "10.0.0.1", 00:14:15.568 "trsvcid": "59968", 00:14:15.568 "trtype": "TCP" 00:14:15.568 }, 00:14:15.568 "qid": 0, 00:14:15.568 "state": "enabled", 00:14:15.568 "thread": "nvmf_tgt_poll_group_000" 00:14:15.568 } 00:14:15.568 ]' 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.568 12:58:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.825 12:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.825 12:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.825 12:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.083 12:58:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.650 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.908 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.488 00:14:17.488 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.488 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.488 12:58:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.745 { 00:14:17.745 "auth": { 00:14:17.745 "dhgroup": "ffdhe2048", 00:14:17.745 "digest": "sha512", 00:14:17.745 "state": "completed" 00:14:17.745 }, 00:14:17.745 "cntlid": 107, 00:14:17.745 "listen_address": { 00:14:17.745 "adrfam": "IPv4", 00:14:17.745 "traddr": "10.0.0.2", 00:14:17.745 "trsvcid": "4420", 00:14:17.745 "trtype": "TCP" 00:14:17.745 }, 00:14:17.745 "peer_address": { 00:14:17.745 "adrfam": "IPv4", 00:14:17.745 "traddr": "10.0.0.1", 00:14:17.745 "trsvcid": "60002", 00:14:17.745 "trtype": "TCP" 00:14:17.745 }, 00:14:17.745 "qid": 0, 00:14:17.745 "state": "enabled", 00:14:17.745 "thread": "nvmf_tgt_poll_group_000" 00:14:17.745 } 00:14:17.745 ]' 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.745 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.746 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.746 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:17.746 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.746 12:58:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.746 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.746 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.312 12:58:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.878 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.137 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.718 00:14:19.718 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.718 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.718 12:58:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.978 { 00:14:19.978 "auth": { 00:14:19.978 "dhgroup": "ffdhe2048", 00:14:19.978 "digest": "sha512", 00:14:19.978 "state": "completed" 00:14:19.978 }, 00:14:19.978 "cntlid": 109, 00:14:19.978 "listen_address": { 00:14:19.978 "adrfam": "IPv4", 00:14:19.978 "traddr": "10.0.0.2", 00:14:19.978 "trsvcid": "4420", 00:14:19.978 "trtype": "TCP" 00:14:19.978 }, 00:14:19.978 "peer_address": { 00:14:19.978 "adrfam": "IPv4", 00:14:19.978 "traddr": "10.0.0.1", 00:14:19.978 "trsvcid": "60032", 00:14:19.978 "trtype": "TCP" 00:14:19.978 }, 00:14:19.978 "qid": 0, 00:14:19.978 "state": "enabled", 00:14:19.978 "thread": "nvmf_tgt_poll_group_000" 00:14:19.978 } 00:14:19.978 ]' 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.978 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.237 12:58:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:21.172 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.430 12:58:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.688 00:14:21.688 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.688 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.688 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.946 12:58:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:14:21.946 { 00:14:21.946 "auth": { 00:14:21.946 "dhgroup": "ffdhe2048", 00:14:21.946 "digest": "sha512", 00:14:21.946 "state": "completed" 00:14:21.946 }, 00:14:21.946 "cntlid": 111, 00:14:21.946 "listen_address": { 00:14:21.946 "adrfam": "IPv4", 00:14:21.946 "traddr": "10.0.0.2", 00:14:21.946 "trsvcid": "4420", 00:14:21.946 "trtype": "TCP" 00:14:21.946 }, 00:14:21.946 "peer_address": { 00:14:21.946 "adrfam": "IPv4", 00:14:21.947 "traddr": "10.0.0.1", 00:14:21.947 "trsvcid": "60042", 00:14:21.947 "trtype": "TCP" 00:14:21.947 }, 00:14:21.947 "qid": 0, 00:14:21.947 "state": "enabled", 00:14:21.947 "thread": "nvmf_tgt_poll_group_000" 00:14:21.947 } 00:14:21.947 ]' 00:14:21.947 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.947 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.947 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.204 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.204 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.204 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.204 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.204 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.461 12:58:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:23.421 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.678 12:58:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.936 00:14:23.936 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.936 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.936 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.194 { 00:14:24.194 "auth": { 00:14:24.194 "dhgroup": "ffdhe3072", 00:14:24.194 "digest": "sha512", 00:14:24.194 "state": "completed" 00:14:24.194 }, 00:14:24.194 "cntlid": 113, 00:14:24.194 "listen_address": { 00:14:24.194 "adrfam": "IPv4", 00:14:24.194 "traddr": "10.0.0.2", 00:14:24.194 "trsvcid": "4420", 00:14:24.194 "trtype": "TCP" 00:14:24.194 }, 00:14:24.194 "peer_address": { 00:14:24.194 "adrfam": "IPv4", 00:14:24.194 "traddr": "10.0.0.1", 00:14:24.194 "trsvcid": "53830", 00:14:24.194 "trtype": "TCP" 00:14:24.194 }, 00:14:24.194 "qid": 0, 00:14:24.194 "state": "enabled", 00:14:24.194 "thread": "nvmf_tgt_poll_group_000" 00:14:24.194 } 00:14:24.194 ]' 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.194 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.452 12:58:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.452 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.452 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.452 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.452 12:58:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.709 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:25.675 12:58:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.939 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.197 00:14:26.197 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.197 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.197 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.455 { 00:14:26.455 "auth": { 00:14:26.455 "dhgroup": "ffdhe3072", 00:14:26.455 "digest": "sha512", 00:14:26.455 "state": "completed" 00:14:26.455 }, 00:14:26.455 "cntlid": 115, 00:14:26.455 "listen_address": { 00:14:26.455 "adrfam": "IPv4", 00:14:26.455 "traddr": "10.0.0.2", 00:14:26.455 "trsvcid": "4420", 00:14:26.455 "trtype": "TCP" 00:14:26.455 }, 00:14:26.455 "peer_address": { 00:14:26.455 "adrfam": "IPv4", 00:14:26.455 "traddr": "10.0.0.1", 00:14:26.455 "trsvcid": "53854", 00:14:26.455 "trtype": "TCP" 00:14:26.455 }, 00:14:26.455 "qid": 0, 00:14:26.455 "state": "enabled", 00:14:26.455 "thread": "nvmf_tgt_poll_group_000" 00:14:26.455 } 00:14:26.455 ]' 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.455 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.712 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:26.712 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.712 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.712 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.712 12:58:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.971 12:58:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:27.538 12:58:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.538 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:27.538 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.538 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.796 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.796 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.796 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:27.796 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:28.054 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:28.054 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.055 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.312 00:14:28.312 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.312 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.312 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.570 { 00:14:28.570 "auth": { 00:14:28.570 "dhgroup": "ffdhe3072", 00:14:28.570 "digest": "sha512", 00:14:28.570 "state": "completed" 00:14:28.570 }, 00:14:28.570 "cntlid": 117, 00:14:28.570 "listen_address": { 00:14:28.570 "adrfam": "IPv4", 00:14:28.570 "traddr": "10.0.0.2", 00:14:28.570 "trsvcid": "4420", 00:14:28.570 "trtype": "TCP" 00:14:28.570 }, 00:14:28.570 "peer_address": { 00:14:28.570 "adrfam": "IPv4", 00:14:28.570 "traddr": "10.0.0.1", 00:14:28.570 "trsvcid": "53892", 00:14:28.570 "trtype": "TCP" 00:14:28.570 }, 00:14:28.570 "qid": 0, 00:14:28.570 "state": "enabled", 00:14:28.570 "thread": "nvmf_tgt_poll_group_000" 00:14:28.570 } 00:14:28.570 ]' 00:14:28.570 12:58:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.828 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.086 12:58:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.020 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.278 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:30.278 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.278 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.278 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.278 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.279 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.537 00:14:30.537 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.537 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.537 12:58:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.796 { 00:14:30.796 "auth": { 00:14:30.796 "dhgroup": "ffdhe3072", 00:14:30.796 "digest": "sha512", 00:14:30.796 "state": "completed" 00:14:30.796 }, 00:14:30.796 "cntlid": 119, 00:14:30.796 "listen_address": { 00:14:30.796 "adrfam": "IPv4", 00:14:30.796 "traddr": "10.0.0.2", 00:14:30.796 "trsvcid": "4420", 00:14:30.796 "trtype": "TCP" 00:14:30.796 }, 00:14:30.796 "peer_address": { 00:14:30.796 "adrfam": "IPv4", 00:14:30.796 "traddr": "10.0.0.1", 00:14:30.796 "trsvcid": "53924", 00:14:30.796 "trtype": "TCP" 00:14:30.796 }, 00:14:30.796 "qid": 0, 00:14:30.796 "state": "enabled", 00:14:30.796 "thread": "nvmf_tgt_poll_group_000" 00:14:30.796 } 00:14:30.796 ]' 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.796 
12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.796 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.054 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.054 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.054 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.054 12:58:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:31.986 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.986 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:31.986 12:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.986 12:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.986 12:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.987 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.987 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.987 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:31.987 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.243 12:58:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.244 12:58:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.244 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.244 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.500 00:14:32.500 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.500 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.500 12:58:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.064 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.065 { 00:14:33.065 "auth": { 00:14:33.065 "dhgroup": "ffdhe4096", 00:14:33.065 "digest": "sha512", 00:14:33.065 "state": "completed" 00:14:33.065 }, 00:14:33.065 "cntlid": 121, 00:14:33.065 "listen_address": { 00:14:33.065 "adrfam": "IPv4", 00:14:33.065 "traddr": "10.0.0.2", 00:14:33.065 "trsvcid": "4420", 00:14:33.065 "trtype": "TCP" 00:14:33.065 }, 00:14:33.065 "peer_address": { 00:14:33.065 "adrfam": "IPv4", 00:14:33.065 "traddr": "10.0.0.1", 00:14:33.065 "trsvcid": "38708", 00:14:33.065 "trtype": "TCP" 00:14:33.065 }, 00:14:33.065 "qid": 0, 00:14:33.065 "state": "enabled", 00:14:33.065 "thread": "nvmf_tgt_poll_group_000" 00:14:33.065 } 00:14:33.065 ]' 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.065 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.322 12:58:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret 
DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.254 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.511 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.512 12:58:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.769 00:14:34.769 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.769 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.769 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
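The trace above repeats the same verification sequence for every digest/dhgroup/key combination. The sketch below condenses one iteration into plain commands; it is illustrative only and not part of the log. The rpc.py path, host RPC socket, subsystem NQN, host NQN/UUID, and addresses are copied from this run, while the shell variables ($digest, $dhgroup, $keyid) and the elided secrets stand in for values the script derives from helpers (rpc_cmd, hostrpc, connect_authenticate) whose definitions are outside this excerpt; target-side calls are assumed to go to the target's default RPC socket.

  # One DH-HMAC-CHAP round trip, assuming keys key$keyid/ckey$keyid were registered earlier in the run
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
  digest=sha512 dhgroup=ffdhe4096 keyid=1          # one of the combinations exercised above

  # host side: restrict the bdev_nvme layer to this digest/dhgroup pair
  $rpc -s $host_sock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup

  # target side: allow the host with the matching key pair (the ctrlr key is omitted for key3 above)
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid

  # authenticate a host controller, confirm the qpair negotiated what was requested, then detach
  $rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'   # expect the digest, the dhgroup, and "state": "completed"
  $rpc -s $host_sock bdev_nvme_detach_controller nvme0

  # repeat the handshake with the kernel initiator, then clean up for the next iteration
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # full secrets appear verbatim in the trace above
  nvme disconnect -n $subnqn
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
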
00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.028 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.028 { 00:14:35.028 "auth": { 00:14:35.028 "dhgroup": "ffdhe4096", 00:14:35.028 "digest": "sha512", 00:14:35.028 "state": "completed" 00:14:35.028 }, 00:14:35.028 "cntlid": 123, 00:14:35.028 "listen_address": { 00:14:35.029 "adrfam": "IPv4", 00:14:35.029 "traddr": "10.0.0.2", 00:14:35.029 "trsvcid": "4420", 00:14:35.029 "trtype": "TCP" 00:14:35.029 }, 00:14:35.029 "peer_address": { 00:14:35.029 "adrfam": "IPv4", 00:14:35.029 "traddr": "10.0.0.1", 00:14:35.029 "trsvcid": "38728", 00:14:35.029 "trtype": "TCP" 00:14:35.029 }, 00:14:35.029 "qid": 0, 00:14:35.029 "state": "enabled", 00:14:35.029 "thread": "nvmf_tgt_poll_group_000" 00:14:35.029 } 00:14:35.029 ]' 00:14:35.029 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.029 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.029 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.287 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.287 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.287 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.287 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.287 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.545 12:58:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:14:36.126 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.692 12:58:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.950 00:14:36.950 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.950 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.950 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.208 { 00:14:37.208 "auth": { 00:14:37.208 "dhgroup": "ffdhe4096", 00:14:37.208 "digest": "sha512", 00:14:37.208 "state": "completed" 00:14:37.208 }, 00:14:37.208 "cntlid": 125, 00:14:37.208 "listen_address": { 00:14:37.208 "adrfam": "IPv4", 00:14:37.208 "traddr": "10.0.0.2", 00:14:37.208 "trsvcid": "4420", 00:14:37.208 "trtype": "TCP" 00:14:37.208 }, 00:14:37.208 "peer_address": { 00:14:37.208 "adrfam": "IPv4", 00:14:37.208 "traddr": "10.0.0.1", 00:14:37.208 "trsvcid": "38748", 00:14:37.208 
"trtype": "TCP" 00:14:37.208 }, 00:14:37.208 "qid": 0, 00:14:37.208 "state": "enabled", 00:14:37.208 "thread": "nvmf_tgt_poll_group_000" 00:14:37.208 } 00:14:37.208 ]' 00:14:37.208 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.466 12:58:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.724 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:38.658 12:58:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:38.916 12:58:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.916 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.173 00:14:39.431 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.431 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.431 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.689 { 00:14:39.689 "auth": { 00:14:39.689 "dhgroup": "ffdhe4096", 00:14:39.689 "digest": "sha512", 00:14:39.689 "state": "completed" 00:14:39.689 }, 00:14:39.689 "cntlid": 127, 00:14:39.689 "listen_address": { 00:14:39.689 "adrfam": "IPv4", 00:14:39.689 "traddr": "10.0.0.2", 00:14:39.689 "trsvcid": "4420", 00:14:39.689 "trtype": "TCP" 00:14:39.689 }, 00:14:39.689 "peer_address": { 00:14:39.689 "adrfam": "IPv4", 00:14:39.689 "traddr": "10.0.0.1", 00:14:39.689 "trsvcid": "38786", 00:14:39.689 "trtype": "TCP" 00:14:39.689 }, 00:14:39.689 "qid": 0, 00:14:39.689 "state": "enabled", 00:14:39.689 "thread": "nvmf_tgt_poll_group_000" 00:14:39.689 } 00:14:39.689 ]' 00:14:39.689 12:58:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.689 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.283 12:58:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:40.852 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.109 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.670 00:14:41.670 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.670 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:14:41.670 12:58:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.926 { 00:14:41.926 "auth": { 00:14:41.926 "dhgroup": "ffdhe6144", 00:14:41.926 "digest": "sha512", 00:14:41.926 "state": "completed" 00:14:41.926 }, 00:14:41.926 "cntlid": 129, 00:14:41.926 "listen_address": { 00:14:41.926 "adrfam": "IPv4", 00:14:41.926 "traddr": "10.0.0.2", 00:14:41.926 "trsvcid": "4420", 00:14:41.926 "trtype": "TCP" 00:14:41.926 }, 00:14:41.926 "peer_address": { 00:14:41.926 "adrfam": "IPv4", 00:14:41.926 "traddr": "10.0.0.1", 00:14:41.926 "trsvcid": "38816", 00:14:41.926 "trtype": "TCP" 00:14:41.926 }, 00:14:41.926 "qid": 0, 00:14:41.926 "state": "enabled", 00:14:41.926 "thread": "nvmf_tgt_poll_group_000" 00:14:41.926 } 00:14:41.926 ]' 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.926 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.182 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.182 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.182 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.439 12:58:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
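Each authentication round in this trace repeats the same host/target choreography for one digest, one DH group and one key index. The following condensed shell sketch restates that sequence; it is not the original target/auth.sh, the RPC script path, host socket, subsystem NQN and host UUID are copied from the trace, the target-side rpc_cmd wrapper is assumed to call rpc.py against the default target socket, and the kernel-initiator connect step shown in the surrounding entries is left out here.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
  digest=sha512
  dhgroup=ffdhe6144
  keyid=1

  # Host side: restrict the user-space initiator to one digest/DH-group pair for this round.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side (assumed default target socket): authorize the host with key N and,
  # for the key indexes that have one, the matching controller key.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Attach through the user-space initiator, confirm the qpair authenticated, then tear down.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN"          # .auth.state should read "completed"
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The rounds that follow in the trace differ only in the key index and in the DH group passed to bdev_nvme_set_options.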
00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.003 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.260 12:58:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.825 00:14:43.825 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.825 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.825 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.082 { 00:14:44.082 "auth": { 00:14:44.082 "dhgroup": "ffdhe6144", 00:14:44.082 "digest": "sha512", 00:14:44.082 "state": "completed" 00:14:44.082 }, 00:14:44.082 "cntlid": 131, 00:14:44.082 "listen_address": { 00:14:44.082 "adrfam": "IPv4", 00:14:44.082 "traddr": "10.0.0.2", 
00:14:44.082 "trsvcid": "4420", 00:14:44.082 "trtype": "TCP" 00:14:44.082 }, 00:14:44.082 "peer_address": { 00:14:44.082 "adrfam": "IPv4", 00:14:44.082 "traddr": "10.0.0.1", 00:14:44.082 "trsvcid": "35172", 00:14:44.082 "trtype": "TCP" 00:14:44.082 }, 00:14:44.082 "qid": 0, 00:14:44.082 "state": "enabled", 00:14:44.082 "thread": "nvmf_tgt_poll_group_000" 00:14:44.082 } 00:14:44.082 ]' 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.082 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.375 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.375 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.375 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.631 12:58:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.195 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.759 12:58:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.759 12:58:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.015 00:14:46.015 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.015 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.015 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.579 { 00:14:46.579 "auth": { 00:14:46.579 "dhgroup": "ffdhe6144", 00:14:46.579 "digest": "sha512", 00:14:46.579 "state": "completed" 00:14:46.579 }, 00:14:46.579 "cntlid": 133, 00:14:46.579 "listen_address": { 00:14:46.579 "adrfam": "IPv4", 00:14:46.579 "traddr": "10.0.0.2", 00:14:46.579 "trsvcid": "4420", 00:14:46.579 "trtype": "TCP" 00:14:46.579 }, 00:14:46.579 "peer_address": { 00:14:46.579 "adrfam": "IPv4", 00:14:46.579 "traddr": "10.0.0.1", 00:14:46.579 "trsvcid": "35198", 00:14:46.579 "trtype": "TCP" 00:14:46.579 }, 00:14:46.579 "qid": 0, 00:14:46.579 "state": "enabled", 00:14:46.579 "thread": "nvmf_tgt_poll_group_000" 00:14:46.579 } 00:14:46.579 ]' 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:46.579 12:58:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.834 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.764 12:58:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.021 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.474 
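After each attach, the trace validates both sides before tearing the controller down again: the host must report a controller named nvme0, and the target-side qpair must show the negotiated digest, the negotiated DH group and a completed authentication state. A minimal sketch of those checks, using the same jq filters as the entries that follow; the variable names are illustrative, and the expected digest and DH group are the ones for the current round.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # The attached controller must show up on the host under the expected name.
  [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target-side qpair must report the negotiated digest and DH group and a
  # completed DH-HMAC-CHAP exchange.
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]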
00:14:48.474 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.474 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.474 12:59:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.732 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.732 { 00:14:48.732 "auth": { 00:14:48.732 "dhgroup": "ffdhe6144", 00:14:48.732 "digest": "sha512", 00:14:48.732 "state": "completed" 00:14:48.732 }, 00:14:48.732 "cntlid": 135, 00:14:48.732 "listen_address": { 00:14:48.732 "adrfam": "IPv4", 00:14:48.732 "traddr": "10.0.0.2", 00:14:48.732 "trsvcid": "4420", 00:14:48.732 "trtype": "TCP" 00:14:48.732 }, 00:14:48.732 "peer_address": { 00:14:48.732 "adrfam": "IPv4", 00:14:48.732 "traddr": "10.0.0.1", 00:14:48.732 "trsvcid": "35222", 00:14:48.732 "trtype": "TCP" 00:14:48.732 }, 00:14:48.733 "qid": 0, 00:14:48.733 "state": "enabled", 00:14:48.733 "thread": "nvmf_tgt_poll_group_000" 00:14:48.733 } 00:14:48.733 ]' 00:14:48.733 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.733 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.733 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.733 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.733 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.988 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.988 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.989 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.244 12:59:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.809 12:59:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:49.809 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.375 12:59:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.936 00:14:50.936 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.936 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.936 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.193 { 00:14:51.193 "auth": { 00:14:51.193 "dhgroup": "ffdhe8192", 00:14:51.193 "digest": "sha512", 
00:14:51.193 "state": "completed" 00:14:51.193 }, 00:14:51.193 "cntlid": 137, 00:14:51.193 "listen_address": { 00:14:51.193 "adrfam": "IPv4", 00:14:51.193 "traddr": "10.0.0.2", 00:14:51.193 "trsvcid": "4420", 00:14:51.193 "trtype": "TCP" 00:14:51.193 }, 00:14:51.193 "peer_address": { 00:14:51.193 "adrfam": "IPv4", 00:14:51.193 "traddr": "10.0.0.1", 00:14:51.193 "trsvcid": "35248", 00:14:51.193 "trtype": "TCP" 00:14:51.193 }, 00:14:51.193 "qid": 0, 00:14:51.193 "state": "enabled", 00:14:51.193 "thread": "nvmf_tgt_poll_group_000" 00:14:51.193 } 00:14:51.193 ]' 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.193 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.450 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.450 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.450 12:59:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.708 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:52.640 12:59:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.899 12:59:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.899 12:59:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.833 00:14:53.833 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.833 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.833 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.091 { 00:14:54.091 "auth": { 00:14:54.091 "dhgroup": "ffdhe8192", 00:14:54.091 "digest": "sha512", 00:14:54.091 "state": "completed" 00:14:54.091 }, 00:14:54.091 "cntlid": 139, 00:14:54.091 "listen_address": { 00:14:54.091 "adrfam": "IPv4", 00:14:54.091 "traddr": "10.0.0.2", 00:14:54.091 "trsvcid": "4420", 00:14:54.091 "trtype": "TCP" 00:14:54.091 }, 00:14:54.091 "peer_address": { 00:14:54.091 "adrfam": "IPv4", 00:14:54.091 "traddr": "10.0.0.1", 00:14:54.091 "trsvcid": "34700", 00:14:54.091 "trtype": "TCP" 00:14:54.091 }, 00:14:54.091 "qid": 0, 00:14:54.091 "state": "enabled", 00:14:54.091 "thread": "nvmf_tgt_poll_group_000" 00:14:54.091 } 00:14:54.091 ]' 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.091 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
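Once the user-space initiator path is verified, each round also exercises the kernel initiator with nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line, as in the connect and disconnect entries just below. A sketch of that leg; the DHHC-1 values here are placeholders rather than the secrets from this run, and the host UUID is the one used throughout the trace.

  HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a
  HOST_KEY='DHHC-1:...'    # host secret for this key index (placeholder, not a key from this run)
  CTRL_KEY='DHHC-1:...'    # controller secret, present only for key indexes with bidirectional auth

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
      --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0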
00:14:54.350 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.350 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.350 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.608 12:59:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:NmUzYTQ4Y2FlMjRlODliMzU3MDYwOGM4NGM3ZjE1ZTTxFkgX: --dhchap-ctrl-secret DHHC-1:02:MDdmNDgwMzFmYTgxMzZiMmIzN2Q2NmM3NGUwNGZhODJjMmNlNTc0MGJmOWQ4ZWFjZ7I21g==: 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:55.173 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.740 12:59:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.307 00:14:56.307 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.307 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.307 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.565 { 00:14:56.565 "auth": { 00:14:56.565 "dhgroup": "ffdhe8192", 00:14:56.565 "digest": "sha512", 00:14:56.565 "state": "completed" 00:14:56.565 }, 00:14:56.565 "cntlid": 141, 00:14:56.565 "listen_address": { 00:14:56.565 "adrfam": "IPv4", 00:14:56.565 "traddr": "10.0.0.2", 00:14:56.565 "trsvcid": "4420", 00:14:56.565 "trtype": "TCP" 00:14:56.565 }, 00:14:56.565 "peer_address": { 00:14:56.565 "adrfam": "IPv4", 00:14:56.565 "traddr": "10.0.0.1", 00:14:56.565 "trsvcid": "34720", 00:14:56.565 "trtype": "TCP" 00:14:56.565 }, 00:14:56.565 "qid": 0, 00:14:56.565 "state": "enabled", 00:14:56.565 "thread": "nvmf_tgt_poll_group_000" 00:14:56.565 } 00:14:56.565 ]' 00:14:56.565 12:59:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.823 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.080 12:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:OGJjYjg0OTBiZjA2MGM0NmU5ZDNkODg5MTgxMmU2ZDUzNTcwNmRiNzA5NDdkOTU2YZ7Jrg==: --dhchap-ctrl-secret DHHC-1:01:ODdmN2YzMGZjZjg0M2RkZjUyMTJjZjYwMjFhNzkzNTTTCXZ0: 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:14:58.013 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.014 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.271 12:59:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.271 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.271 12:59:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.837 00:14:59.095 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.095 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.095 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
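The closing phase of the trace, which begins a few entries further down, widens the host to every supported digest and DH group, authorizes the host on the target with key1 only, and then checks that attaching with key2 is rejected; the NOT wrapper treats the JSON-RPC Input/output error as the expected outcome. A sketch of that negative check, again an illustration under the same assumptions as the earlier sketches rather than the original script.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

  # Let the host negotiate anything, so only the key mismatch can make the attach fail.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  # The target only knows key1 for this host ...
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

  # ... so an attach that presents key2 has to be rejected; failure is the expected outcome.
  if "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
      echo "unexpected: attach succeeded with a mismatched DH-HMAC-CHAP key" >&2
      exit 1
  fi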
00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.352 { 00:14:59.352 "auth": { 00:14:59.352 "dhgroup": "ffdhe8192", 00:14:59.352 "digest": "sha512", 00:14:59.352 "state": "completed" 00:14:59.352 }, 00:14:59.352 "cntlid": 143, 00:14:59.352 "listen_address": { 00:14:59.352 "adrfam": "IPv4", 00:14:59.352 "traddr": "10.0.0.2", 00:14:59.352 "trsvcid": "4420", 00:14:59.352 "trtype": "TCP" 00:14:59.352 }, 00:14:59.352 "peer_address": { 00:14:59.352 "adrfam": "IPv4", 00:14:59.352 "traddr": "10.0.0.1", 00:14:59.352 "trsvcid": "34732", 00:14:59.352 "trtype": "TCP" 00:14:59.352 }, 00:14:59.352 "qid": 0, 00:14:59.352 "state": "enabled", 00:14:59.352 "thread": "nvmf_tgt_poll_group_000" 00:14:59.352 } 00:14:59.352 ]' 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.352 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.353 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.610 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.610 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.610 12:59:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.610 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:00.543 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:00.544 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:00.544 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:00.544 12:59:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.802 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.368 00:15:01.368 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.368 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.368 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.627 { 00:15:01.627 "auth": { 00:15:01.627 "dhgroup": "ffdhe8192", 00:15:01.627 "digest": "sha512", 00:15:01.627 "state": "completed" 00:15:01.627 }, 00:15:01.627 "cntlid": 145, 00:15:01.627 "listen_address": { 00:15:01.627 "adrfam": "IPv4", 00:15:01.627 "traddr": "10.0.0.2", 00:15:01.627 "trsvcid": "4420", 00:15:01.627 "trtype": "TCP" 00:15:01.627 }, 00:15:01.627 "peer_address": { 00:15:01.627 "adrfam": "IPv4", 00:15:01.627 "traddr": "10.0.0.1", 00:15:01.627 "trsvcid": "34760", 00:15:01.627 "trtype": "TCP" 00:15:01.627 }, 00:15:01.627 "qid": 0, 00:15:01.627 "state": "enabled", 00:15:01.627 "thread": "nvmf_tgt_poll_group_000" 00:15:01.627 } 
00:15:01.627 ]' 00:15:01.627 12:59:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.627 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.627 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.627 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.885 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.885 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.885 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.885 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.143 12:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:OWQwNDYzNDU0NjVkOWJlNTIzYWZkZDcwNTkzYjA1ZWUxODQ1MzAzZmUxZDA5M2Yy0hULug==: --dhchap-ctrl-secret DHHC-1:03:MmE2OWY4MzEyNjZlODhkMTdmYjA5ZDYzMWU0OWY3ZTM0MTVjODNmYjFhY2ZkNjk4YjRkN2NhZTMzYWQ0ODMwOTa4inc=: 00:15:02.709 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.968 12:59:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:02.968 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:03.536 2024/07/15 12:59:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:03.536 request: 00:15:03.536 { 00:15:03.536 "method": "bdev_nvme_attach_controller", 00:15:03.536 "params": { 00:15:03.536 "name": "nvme0", 00:15:03.536 "trtype": "tcp", 00:15:03.536 "traddr": "10.0.0.2", 00:15:03.536 "adrfam": "ipv4", 00:15:03.536 "trsvcid": "4420", 00:15:03.536 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:03.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:03.536 "prchk_reftag": false, 00:15:03.536 "prchk_guard": false, 00:15:03.536 "hdgst": false, 00:15:03.536 "ddgst": false, 00:15:03.536 "dhchap_key": "key2" 00:15:03.536 } 00:15:03.536 } 00:15:03.536 Got JSON-RPC error response 00:15:03.536 GoRPCClient: error on JSON-RPC call 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:03.536 12:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.104 2024/07/15 12:59:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:04.104 request: 00:15:04.104 { 00:15:04.104 "method": "bdev_nvme_attach_controller", 00:15:04.104 "params": { 00:15:04.104 "name": "nvme0", 00:15:04.104 "trtype": "tcp", 00:15:04.104 "traddr": "10.0.0.2", 00:15:04.104 "adrfam": "ipv4", 00:15:04.104 "trsvcid": "4420", 00:15:04.104 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:04.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:04.104 "prchk_reftag": false, 00:15:04.104 "prchk_guard": false, 00:15:04.104 "hdgst": false, 00:15:04.104 "ddgst": false, 00:15:04.104 "dhchap_key": "key1", 00:15:04.104 "dhchap_ctrlr_key": "ckey2" 00:15:04.104 } 00:15:04.104 } 00:15:04.104 Got JSON-RPC error response 00:15:04.104 GoRPCClient: error on JSON-RPC call 00:15:04.104 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:04.104 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.104 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.104 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.105 12:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.672 2024/07/15 12:59:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:04.672 request: 00:15:04.672 { 00:15:04.672 "method": "bdev_nvme_attach_controller", 00:15:04.672 "params": { 00:15:04.672 "name": "nvme0", 00:15:04.672 "trtype": "tcp", 00:15:04.672 "traddr": "10.0.0.2", 00:15:04.672 "adrfam": "ipv4", 00:15:04.672 "trsvcid": "4420", 00:15:04.672 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:15:04.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:04.672 "prchk_reftag": false, 00:15:04.672 "prchk_guard": false, 00:15:04.672 "hdgst": false, 00:15:04.672 "ddgst": false, 00:15:04.672 "dhchap_key": "key1", 00:15:04.672 "dhchap_ctrlr_key": "ckey1" 00:15:04.672 } 00:15:04.672 } 00:15:04.672 Got JSON-RPC error response 00:15:04.672 GoRPCClient: error on JSON-RPC call 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77890 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77890 ']' 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77890 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77890 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:04.672 killing process with pid 77890 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77890' 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77890 00:15:04.672 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77890 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=82876 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 82876 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82876 ']' 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.931 12:59:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.931 12:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82876 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82876 ']' 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.305 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
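The restart traced above (killprocess of the first target, then nvmfappstart --wait-for-rpc -L nvmf_auth as pid 82876) reduces to the launch-and-wait sequence sketched below. This is a sketch only, not the suite's waitforlisten helper: the polling loop and the direct framework_start_init call stand in for what autotest_common.sh drives over its batched rpc_cmd, and the repo path and netns name are simply the ones used in this run.

# Start nvmf_tgt inside the test namespace with nvmf_auth logging enabled,
# deferring subsystem init until an explicit RPC (--wait-for-rpc).
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers,
# then let initialization proceed.
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
"$SPDK/scripts/rpc.py" framework_start_init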
00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.306 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.564 12:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.180 00:15:07.180 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.180 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.180 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.437 { 00:15:07.437 "auth": { 00:15:07.437 "dhgroup": 
"ffdhe8192", 00:15:07.437 "digest": "sha512", 00:15:07.437 "state": "completed" 00:15:07.437 }, 00:15:07.437 "cntlid": 1, 00:15:07.437 "listen_address": { 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.2", 00:15:07.437 "trsvcid": "4420", 00:15:07.437 "trtype": "TCP" 00:15:07.437 }, 00:15:07.437 "peer_address": { 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.1", 00:15:07.437 "trsvcid": "57100", 00:15:07.437 "trtype": "TCP" 00:15:07.437 }, 00:15:07.437 "qid": 0, 00:15:07.437 "state": "enabled", 00:15:07.437 "thread": "nvmf_tgt_poll_group_000" 00:15:07.437 } 00:15:07.437 ]' 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.437 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.696 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.696 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.696 12:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.696 12:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.696 12:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.696 12:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.954 12:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:YzIwNjg0NDZkYjYzYjhkYTAzNzlkNGZmM2ZjY2ZiNzQ5MjBhYzlhNWVmOTkxMjU4MmExOWEzZWIzMWEzM2UyMBuxOF0=: 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:08.887 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.144 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.402 2024/07/15 12:59:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:09.402 request: 00:15:09.402 { 00:15:09.402 "method": "bdev_nvme_attach_controller", 00:15:09.402 "params": { 00:15:09.402 "name": "nvme0", 00:15:09.402 "trtype": "tcp", 00:15:09.402 "traddr": "10.0.0.2", 00:15:09.402 "adrfam": "ipv4", 00:15:09.402 "trsvcid": "4420", 00:15:09.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:09.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:09.402 "prchk_reftag": false, 00:15:09.402 "prchk_guard": false, 00:15:09.402 "hdgst": false, 00:15:09.402 "ddgst": false, 00:15:09.402 "dhchap_key": "key3" 00:15:09.402 } 00:15:09.402 } 00:15:09.402 Got JSON-RPC error response 00:15:09.402 GoRPCClient: error on JSON-RPC call 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
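The NOT cases traced around here follow one pattern: narrow the host's DH-HMAC-CHAP offer (first --dhchap-digests sha256, then --dhchap-dhgroups ffdhe2048) until it no longer overlaps the sha512/ffdhe8192 setup the target was just verified with, then assert that bdev_nvme_attach_controller fails, which is the Code=-5 Input/output error in the JSON-RPC responses above. A minimal sketch of that pattern, assuming key3 and the host-side RPC socket were registered earlier in the suite; the plain if stands in for the NOT helper:

# Host-side RPC wrapper; /var/tmp/host.sock is the socket used throughout this run.
rpc_host() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Restrict the host to a digest the negotiation cannot complete with.
rpc_host bdev_nvme_set_options --dhchap-digests sha256

# The attach is now expected to fail; succeeding here would be a test failure.
if rpc_host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3; then
    echo "attach unexpectedly succeeded with a mismatched digest" >&2
    exit 1
fi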
00:15:09.402 12:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.660 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.916 2024/07/15 12:59:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:09.916 request: 00:15:09.916 { 00:15:09.916 "method": "bdev_nvme_attach_controller", 00:15:09.916 "params": { 00:15:09.916 "name": "nvme0", 00:15:09.916 "trtype": "tcp", 00:15:09.916 "traddr": "10.0.0.2", 00:15:09.916 "adrfam": "ipv4", 00:15:09.916 "trsvcid": "4420", 00:15:09.916 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:09.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:09.916 "prchk_reftag": false, 00:15:09.916 "prchk_guard": false, 00:15:09.916 "hdgst": false, 00:15:09.916 "ddgst": false, 00:15:09.916 "dhchap_key": "key3" 00:15:09.916 } 00:15:09.916 } 00:15:09.916 Got JSON-RPC error response 00:15:09.916 GoRPCClient: error on JSON-RPC call 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
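After the mismatch cases, the trace shows the host being reset to the full digest and dhgroup lists before the key0/key1 checks. Elsewhere in this run a completed negotiation is confirmed by querying the target's qpairs and checking .auth with jq; a sketch of that restore-and-verify pattern follows, reusing the values and jq filters from the trace and assuming a controller is attached at that point:

# Put the host back to the full DH-HMAC-CHAP parameter set.
rpc_host() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_host bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# Ask the target (default socket /var/tmp/spdk.sock) what the live qpair negotiated.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]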
00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:09.916 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:10.481 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:10.739 2024/07/15 12:59:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:10.739 request: 00:15:10.739 { 00:15:10.739 "method": "bdev_nvme_attach_controller", 00:15:10.739 "params": { 00:15:10.739 "name": "nvme0", 00:15:10.739 "trtype": "tcp", 00:15:10.739 "traddr": "10.0.0.2", 00:15:10.739 "adrfam": "ipv4", 00:15:10.739 "trsvcid": "4420", 00:15:10.739 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:10.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:15:10.739 "prchk_reftag": false, 00:15:10.739 "prchk_guard": false, 00:15:10.739 "hdgst": false, 00:15:10.739 "ddgst": false, 00:15:10.739 "dhchap_key": "key0", 00:15:10.739 "dhchap_ctrlr_key": "key1" 00:15:10.739 } 00:15:10.739 } 00:15:10.739 Got JSON-RPC error response 00:15:10.739 GoRPCClient: error on JSON-RPC call 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:10.739 12:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:10.996 00:15:10.996 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:10.996 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:10.996 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.253 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.253 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.253 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77915 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 77915 ']' 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77915 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.509 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77915 00:15:11.767 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:11.767 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:11.767 killing process with pid 77915 00:15:11.767 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77915' 00:15:11.767 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77915 00:15:11.767 12:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77915 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.026 rmmod nvme_tcp 00:15:12.026 rmmod nvme_fabrics 00:15:12.026 rmmod nvme_keyring 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@493 -- # '[' -n 82876 ']' 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@494 -- # killprocess 82876 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82876 ']' 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82876 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82876 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82876' 00:15:12.026 killing process with pid 82876 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82876 00:15:12.026 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82876 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@500 -- # 
nvmf_tcp_fini 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Mip /tmp/spdk.key-sha256.ayD /tmp/spdk.key-sha384.LW9 /tmp/spdk.key-sha512.gGL /tmp/spdk.key-sha512.ieJ /tmp/spdk.key-sha384.Kpb /tmp/spdk.key-sha256.9vQ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:12.285 00:15:12.285 real 3m2.728s 00:15:12.285 user 7m25.551s 00:15:12.285 sys 0m21.855s 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:12.285 12:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.285 ************************************ 00:15:12.285 END TEST nvmf_auth_target 00:15:12.285 ************************************ 00:15:12.285 12:59:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:12.285 12:59:24 nvmf_tcp -- nvmf/nvmf.sh@63 -- # '[' tcp = tcp ']' 00:15:12.285 12:59:24 nvmf_tcp -- nvmf/nvmf.sh@64 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:12.285 12:59:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:12.285 12:59:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.285 12:59:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.285 ************************************ 00:15:12.285 START TEST nvmf_bdevio_no_huge 00:15:12.285 ************************************ 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:12.285 * Looking for test storage... 
00:15:12.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.285 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.286 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # prepare_net_devs 00:15:12.286 12:59:24 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # local -g is_hw=no 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # remove_spdk_ns 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # nvmf_veth_init 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:15:12.286 Cannot find device "nvmf_tgt_br" 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.286 Cannot find device "nvmf_tgt_br2" 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # true 00:15:12.286 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:15:12.544 Cannot find device "nvmf_tgt_br" 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:15:12.544 Cannot find device "nvmf_tgt_br2" 00:15:12.544 
12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.544 12:59:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.544 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:15:12.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:12.803 00:15:12.803 --- 10.0.0.2 ping statistics --- 00:15:12.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.803 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:15:12.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:12.803 00:15:12.803 --- 10.0.0.3 ping statistics --- 00:15:12.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.803 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:12.803 00:15:12.803 --- 10.0.0.1 ping statistics --- 00:15:12.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.803 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@437 -- # return 0 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@485 -- # nvmfpid=83300 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@486 -- # waitforlisten 83300 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83300 ']' 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 
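
The nvmf_veth_init trace above boils down to a small veth/bridge topology: one initiator-side interface in the root namespace and two target-side interfaces inside nvmf_tgt_ns_spdk, all attached to the same bridge. A condensed sketch, with interface names and addresses copied from the trace and the helper's cleanup and "Cannot find/open" noise omitted:

    # Sketch of the topology nvmf_veth_init builds above (names/addresses from the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78), which is the command shown at the end of the trace above.
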
00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.803 12:59:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:12.803 [2024-07-15 12:59:25.121140] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:15:12.803 [2024-07-15 12:59:25.121257] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:12.803 [2024-07-15 12:59:25.262720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.061 [2024-07-15 12:59:25.393592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.061 [2024-07-15 12:59:25.393656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.061 [2024-07-15 12:59:25.393670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.061 [2024-07-15 12:59:25.393680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.061 [2024-07-15 12:59:25.393689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.061 [2024-07-15 12:59:25.393824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.061 [2024-07-15 12:59:25.393918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:13.061 [2024-07-15 12:59:25.394111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:13.061 [2024-07-15 12:59:25.394122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.028 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 [2024-07-15 12:59:26.195751] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.029 12:59:26 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 Malloc0 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:14.029 [2024-07-15 12:59:26.233660] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # config=() 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # local subsystem config 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:15:14.029 { 00:15:14.029 "params": { 00:15:14.029 "name": "Nvme$subsystem", 00:15:14.029 "trtype": "$TEST_TRANSPORT", 00:15:14.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.029 "adrfam": "ipv4", 00:15:14.029 "trsvcid": "$NVMF_PORT", 00:15:14.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.029 "hdgst": ${hdgst:-false}, 00:15:14.029 "ddgst": ${ddgst:-false} 00:15:14.029 }, 00:15:14.029 "method": "bdev_nvme_attach_controller" 00:15:14.029 } 00:15:14.029 EOF 00:15:14.029 )") 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # cat 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # jq . 
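
The rpc_cmd calls traced above assemble the bdevio target. A minimal sketch of the same sequence using scripts/rpc.py directly (rpc_cmd is a thin wrapper; /var/tmp/spdk.sock is the default socket the "Waiting for process..." message above refers to):

    # Target-side RPC sequence from the trace: transport, backing bdev, subsystem,
    # namespace, and a TCP listener on the in-namespace address.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then pointed at that listener by piping the gen_nvmf_target_json output (the bdev_nvme_attach_controller JSON printed just below) into bdevio --json /dev/fd/62 --no-huge -s 1024.
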
00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@561 -- # IFS=, 00:15:14.029 12:59:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:15:14.029 "params": { 00:15:14.029 "name": "Nvme1", 00:15:14.029 "trtype": "tcp", 00:15:14.029 "traddr": "10.0.0.2", 00:15:14.029 "adrfam": "ipv4", 00:15:14.029 "trsvcid": "4420", 00:15:14.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.029 "hdgst": false, 00:15:14.029 "ddgst": false 00:15:14.029 }, 00:15:14.029 "method": "bdev_nvme_attach_controller" 00:15:14.029 }' 00:15:14.029 [2024-07-15 12:59:26.293931] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:15:14.029 [2024-07-15 12:59:26.294036] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83354 ] 00:15:14.029 [2024-07-15 12:59:26.442434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.287 [2024-07-15 12:59:26.605957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.287 [2024-07-15 12:59:26.606017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.287 [2024-07-15 12:59:26.606026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.545 I/O targets: 00:15:14.545 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:14.545 00:15:14.545 00:15:14.545 CUnit - A unit testing framework for C - Version 2.1-3 00:15:14.545 http://cunit.sourceforge.net/ 00:15:14.545 00:15:14.545 00:15:14.545 Suite: bdevio tests on: Nvme1n1 00:15:14.545 Test: blockdev write read block ...passed 00:15:14.545 Test: blockdev write zeroes read block ...passed 00:15:14.545 Test: blockdev write zeroes read no split ...passed 00:15:14.545 Test: blockdev write zeroes read split ...passed 00:15:14.545 Test: blockdev write zeroes read split partial ...passed 00:15:14.546 Test: blockdev reset ...[2024-07-15 12:59:26.957190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:14.546 [2024-07-15 12:59:26.957338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2426460 (9): Bad file descriptor 00:15:14.546 passed 00:15:14.546 Test: blockdev write read 8 blocks ...[2024-07-15 12:59:26.976707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:14.546 passed 00:15:14.546 Test: blockdev write read size > 128k ...passed 00:15:14.546 Test: blockdev write read invalid size ...passed 00:15:14.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:14.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:14.804 Test: blockdev write read max offset ...passed 00:15:14.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:14.804 Test: blockdev writev readv 8 blocks ...passed 00:15:14.804 Test: blockdev writev readv 30 x 1block ...passed 00:15:14.804 Test: blockdev writev readv block ...passed 00:15:14.804 Test: blockdev writev readv size > 128k ...passed 00:15:14.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:14.804 Test: blockdev comparev and writev ...[2024-07-15 12:59:27.156058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.156118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.156146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.156161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.156618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.156650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.156673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.156687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.157024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.157046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.157068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.157081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.157391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.157427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.157450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:14.804 [2024-07-15 12:59:27.157463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:15:14.804 passed 00:15:14.804 Test: blockdev nvme passthru rw ...passed 00:15:14.804 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:59:27.240114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:14.804 [2024-07-15 12:59:27.240176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:14.804 passed 00:15:14.804 Test: blockdev nvme admin passthru ...[2024-07-15 12:59:27.240359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:14.804 [2024-07-15 12:59:27.240398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.240548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:14.804 [2024-07-15 12:59:27.240569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:14.804 [2024-07-15 12:59:27.240705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:14.804 [2024-07-15 12:59:27.240725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:14.804 passed 00:15:15.063 Test: blockdev copy ...passed 00:15:15.063 00:15:15.063 Run Summary: Type Total Ran Passed Failed Inactive 00:15:15.063 suites 1 1 n/a 0 0 00:15:15.063 tests 23 23 23 0 0 00:15:15.063 asserts 152 152 152 0 n/a 00:15:15.063 00:15:15.063 Elapsed time = 0.982 seconds 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # nvmfcleanup 00:15:15.392 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:15.670 rmmod nvme_tcp 00:15:15.670 rmmod nvme_fabrics 00:15:15.670 rmmod nvme_keyring 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # '[' -n 83300 ']' 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # killprocess 83300 00:15:15.670 
12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83300 ']' 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83300 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83300 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:15.670 killing process with pid 83300 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83300' 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83300 00:15:15.670 12:59:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83300 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@282 -- # remove_spdk_ns 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:15:15.929 ************************************ 00:15:15.929 END TEST nvmf_bdevio_no_huge 00:15:15.929 ************************************ 00:15:15.929 00:15:15.929 real 0m3.751s 00:15:15.929 user 0m14.044s 00:15:15.929 sys 0m1.455s 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.929 12:59:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:15.929 12:59:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.929 12:59:28 nvmf_tcp -- nvmf/nvmf.sh@65 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:15.929 12:59:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.929 12:59:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.929 12:59:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.187 ************************************ 00:15:16.187 START TEST nvmf_tls 00:15:16.187 ************************************ 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:16.187 * Looking for test storage... 
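
The nvmftestfini teardown traced above has a short visible shape; killprocess and _remove_spdk_ns are test helpers, and since _remove_spdk_ns runs with xtrace disabled, deleting the namespace is an assumed effect rather than something the trace shows:

    # Rough shape of the teardown above (sketch, not the helper source).
    sync
    modprobe -v -r nvme-tcp               # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 83300
    ip netns delete nvmf_tgt_ns_spdk      # assumed effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
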
00:15:16.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.187 12:59:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.188 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@452 -- # prepare_net_devs 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # local -g is_hw=no 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # remove_spdk_ns 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@435 -- # [[ 
tcp == tcp ]] 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@436 -- # nvmf_veth_init 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:15:16.188 Cannot find device "nvmf_tgt_br" 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.188 Cannot find device "nvmf_tgt_br2" 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:15:16.188 Cannot find device "nvmf_tgt_br" 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:15:16.188 Cannot find device "nvmf_tgt_br2" 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.188 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.446 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:15:16.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:15:16.446 00:15:16.446 --- 10.0.0.2 ping statistics --- 00:15:16.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.446 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:15:16.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:16.447 00:15:16.447 --- 10.0.0.3 ping statistics --- 00:15:16.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.447 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:16.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:16.447 00:15:16.447 --- 10.0.0.1 ping statistics --- 00:15:16.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.447 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@437 -- # return 0 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=83536 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:16.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 83536 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83536 ']' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.447 12:59:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.705 [2024-07-15 12:59:28.924028] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:15:16.705 [2024-07-15 12:59:28.924152] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.705 [2024-07-15 12:59:29.066074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.705 [2024-07-15 12:59:29.126457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.705 [2024-07-15 12:59:29.126515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:16.705 [2024-07-15 12:59:29.126528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.705 [2024-07-15 12:59:29.126537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.705 [2024-07-15 12:59:29.126544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.705 [2024-07-15 12:59:29.126574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:17.641 12:59:29 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:17.899 true 00:15:17.899 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:17.899 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:18.157 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:18.157 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:18.157 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:18.416 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:18.416 12:59:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:18.675 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:18.675 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:18.675 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:18.932 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.189 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:19.446 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:19.446 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:19.447 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.447 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:19.705 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:19.705 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:19.705 12:59:31 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:19.963 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:19.963 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
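
In the tls.sh run the target is started with --wait-for-rpc, so the ssl socket implementation can be configured before the framework comes up. A sketch of the RPC sequence exercised above (the final --tls-version 13 and framework_start_init follow just below in the trace):

    # Socket-implementation configuration exercised by tls.sh before framework init.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl                        # returns "true" above
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # 0 until a version is chosen
    $rpc sock_impl_set_options -i ssl --tls-version 13       # then 7, to exercise the setter
    $rpc sock_impl_set_options -i ssl --enable-ktls          # toggled on and back off above
    $rpc sock_impl_set_options -i ssl --disable-ktls
    $rpc sock_impl_set_options -i ssl --tls-version 13       # final value, set just below ...
    $rpc framework_start_init                                # ... before the framework starts
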
00:15:20.221 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:20.221 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:20.221 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:20.479 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:20.479 12:59:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # key=ffeeddccbbaa99887766554433221100 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.OB5piMLiqe 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6oCvEbFdPn 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.OB5piMLiqe 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6oCvEbFdPn 00:15:20.737 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:20.994 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:21.559 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.OB5piMLiqe 
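
format_interchange_psk above turns a raw hex secret into the NVMe TLS PSK interchange form. Judging only from the trace output, it base64-encodes the secret's ASCII bytes together with an appended CRC32 and wraps the result in an NVMeTLSkey-1:<digest>: prefix; the sketch below reproduces that shape under those assumptions (format_psk is a hypothetical stand-in for the helper, and the CRC byte order is inferred from the output, not read from common.sh):

    format_psk() {   # hypothetical stand-in for format_interchange_psk/format_key
        local key=$1 digest=$2
        python3 -c '
    import base64, sys, zlib
    k = sys.argv[1].encode()                   # the hex string is used as ASCII bytes
    crc = zlib.crc32(k).to_bytes(4, "little")  # byte order assumed from the output above
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))
    ' "$key" "$digest"
    }

    key_path=$(mktemp)                         # /tmp/tmp.OB5piMLiqe in the run above
    format_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
    chmod 0600 "$key_path"                     # the test keeps both key files at 0600

Keeping the secret in a private temp file matches how the --psk and --psk-path options below consume it: they take a path, not the literal key string.
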
00:15:21.559 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OB5piMLiqe 00:15:21.559 12:59:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.818 [2024-07-15 12:59:34.059970] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.818 12:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:22.076 12:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:22.335 [2024-07-15 12:59:34.668100] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.335 [2024-07-15 12:59:34.668352] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.335 12:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:22.594 malloc0 00:15:22.594 12:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:22.852 12:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OB5piMLiqe 00:15:23.152 [2024-07-15 12:59:35.451892] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:23.152 12:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OB5piMLiqe 00:15:35.383 Initializing NVMe Controllers 00:15:35.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:35.383 Initialization complete. Launching workers. 
00:15:35.383 ======================================================== 00:15:35.383 Latency(us) 00:15:35.383 Device Information : IOPS MiB/s Average min max 00:15:35.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8742.76 34.15 7322.07 1442.68 212672.28 00:15:35.383 ======================================================== 00:15:35.383 Total : 8742.76 34.15 7322.07 1442.68 212672.28 00:15:35.383 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OB5piMLiqe 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OB5piMLiqe' 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83899 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83899 /var/tmp/bdevperf.sock 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83899 ']' 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.383 12:59:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.383 [2024-07-15 12:59:45.728760] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
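
setup_nvmf_tgt, traced above, is the same bring-up as the bdevio case but with TLS folded in: the listener is created with -k and the host NQN is registered with the PSK file. A sketch of that sequence plus the first data-path check (spdk_nvme_perf over the ssl sock implementation), with every flag taken from the trace:

    # TLS-enabled target bring-up as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.OB5piMLiqe

    # First data-path check: perf from inside the namespace, TLS via the ssl sock impl.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/tmp.OB5piMLiqe
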
00:15:35.383 [2024-07-15 12:59:45.728891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83899 ] 00:15:35.383 [2024-07-15 12:59:45.883648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.383 [2024-07-15 12:59:45.968986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.383 12:59:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.383 12:59:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:35.383 12:59:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OB5piMLiqe 00:15:35.383 [2024-07-15 12:59:46.979354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.383 [2024-07-15 12:59:46.979485] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:35.383 TLSTESTn1 00:15:35.383 12:59:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:35.383 Running I/O for 10 seconds... 00:15:45.352 00:15:45.352 Latency(us) 00:15:45.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.352 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:45.352 Verification LBA range: start 0x0 length 0x2000 00:15:45.352 TLSTESTn1 : 10.02 3674.79 14.35 0.00 0.00 34763.85 7179.17 39321.60 00:15:45.352 =================================================================================================================== 00:15:45.352 Total : 3674.79 14.35 0.00 0.00 34763.85 7179.17 39321.60 00:15:45.352 0 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83899 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83899 ']' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83899 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83899 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:45.352 killing process with pid 83899 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83899' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83899 00:15:45.352 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.352 00:15:45.352 Latency(us) 00:15:45.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.352 
=================================================================================================================== 00:15:45.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.352 [2024-07-15 12:59:57.257514] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83899 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6oCvEbFdPn 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6oCvEbFdPn 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6oCvEbFdPn 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6oCvEbFdPn' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84045 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84045 /var/tmp/bdevperf.sock 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84045 ']' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.352 [2024-07-15 12:59:57.482866] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:15:45.352 [2024-07-15 12:59:57.482972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84045 ] 00:15:45.352 [2024-07-15 12:59:57.623296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.352 [2024-07-15 12:59:57.692423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:45.352 12:59:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6oCvEbFdPn 00:15:45.610 [2024-07-15 12:59:58.069189] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.610 [2024-07-15 12:59:58.069353] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:45.610 [2024-07-15 12:59:58.074589] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:45.610 [2024-07-15 12:59:58.075159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7cca0 (107): Transport endpoint is not connected 00:15:45.610 [2024-07-15 12:59:58.076135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7cca0 (9): Bad file descriptor 00:15:45.610 [2024-07-15 12:59:58.077130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:45.610 [2024-07-15 12:59:58.077159] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:45.611 [2024-07-15 12:59:58.077176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:45.868 2024/07/15 12:59:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.6oCvEbFdPn subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:45.868 request: 00:15:45.868 { 00:15:45.868 "method": "bdev_nvme_attach_controller", 00:15:45.868 "params": { 00:15:45.868 "name": "TLSTEST", 00:15:45.868 "trtype": "tcp", 00:15:45.868 "traddr": "10.0.0.2", 00:15:45.868 "adrfam": "ipv4", 00:15:45.868 "trsvcid": "4420", 00:15:45.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:45.868 "prchk_reftag": false, 00:15:45.868 "prchk_guard": false, 00:15:45.868 "hdgst": false, 00:15:45.868 "ddgst": false, 00:15:45.868 "psk": "/tmp/tmp.6oCvEbFdPn" 00:15:45.868 } 00:15:45.868 } 00:15:45.868 Got JSON-RPC error response 00:15:45.868 GoRPCClient: error on JSON-RPC call 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84045 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84045 ']' 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84045 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84045 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:45.868 killing process with pid 84045 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84045' 00:15:45.868 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.868 00:15:45.868 Latency(us) 00:15:45.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.868 =================================================================================================================== 00:15:45.868 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84045 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84045 00:15:45.868 [2024-07-15 12:59:58.128370] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.868 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OB5piMLiqe 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OB5piMLiqe 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OB5piMLiqe 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OB5piMLiqe' 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84077 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84077 /var/tmp/bdevperf.sock 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84077 ']' 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.869 12:59:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.128 [2024-07-15 12:59:58.351433] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:15:46.128 [2024-07-15 12:59:58.351529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84077 ] 00:15:46.128 [2024-07-15 12:59:58.488365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.128 [2024-07-15 12:59:58.564909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.063 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.063 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:47.063 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.OB5piMLiqe 00:15:47.321 [2024-07-15 12:59:59.560298] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:47.321 [2024-07-15 12:59:59.560419] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:47.321 [2024-07-15 12:59:59.565448] tcp.c: 940:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:47.321 [2024-07-15 12:59:59.565510] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:47.321 [2024-07-15 12:59:59.565570] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:47.321 [2024-07-15 12:59:59.566127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fcca0 (107): Transport endpoint is not connected 00:15:47.321 [2024-07-15 12:59:59.567109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fcca0 (9): Bad file descriptor 00:15:47.321 [2024-07-15 12:59:59.568104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:47.322 [2024-07-15 12:59:59.568132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:47.322 [2024-07-15 12:59:59.568149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:47.322 2024/07/15 12:59:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.OB5piMLiqe subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:47.322 request: 00:15:47.322 { 00:15:47.322 "method": "bdev_nvme_attach_controller", 00:15:47.322 "params": { 00:15:47.322 "name": "TLSTEST", 00:15:47.322 "trtype": "tcp", 00:15:47.322 "traddr": "10.0.0.2", 00:15:47.322 "adrfam": "ipv4", 00:15:47.322 "trsvcid": "4420", 00:15:47.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.322 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:47.322 "prchk_reftag": false, 00:15:47.322 "prchk_guard": false, 00:15:47.322 "hdgst": false, 00:15:47.322 "ddgst": false, 00:15:47.322 "psk": "/tmp/tmp.OB5piMLiqe" 00:15:47.322 } 00:15:47.322 } 00:15:47.322 Got JSON-RPC error response 00:15:47.322 GoRPCClient: error on JSON-RPC call 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84077 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84077 ']' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84077 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84077 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:47.322 killing process with pid 84077 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84077' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84077 00:15:47.322 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.322 00:15:47.322 Latency(us) 00:15:47.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.322 =================================================================================================================== 00:15:47.322 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84077 00:15:47.322 [2024-07-15 12:59:59.613614] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OB5piMLiqe 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OB5piMLiqe 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OB5piMLiqe 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OB5piMLiqe' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84123 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84123 /var/tmp/bdevperf.sock 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84123 ']' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.322 12:59:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.580 [2024-07-15 12:59:59.844086] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:15:47.580 [2024-07-15 12:59:59.844213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:15:47.580 [2024-07-15 12:59:59.982145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.580 [2024-07-15 13:00:00.045935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.514 13:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.514 13:00:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:48.514 13:00:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OB5piMLiqe 00:15:48.774 [2024-07-15 13:00:01.099272] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:48.774 [2024-07-15 13:00:01.099452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:48.774 [2024-07-15 13:00:01.106287] tcp.c: 940:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:48.775 [2024-07-15 13:00:01.106329] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:48.775 [2024-07-15 13:00:01.106408] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:48.775 [2024-07-15 13:00:01.106823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2413ca0 (107): Transport endpoint is not connected 00:15:48.775 [2024-07-15 13:00:01.107806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2413ca0 (9): Bad file descriptor 00:15:48.775 [2024-07-15 13:00:01.108801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:48.775 [2024-07-15 13:00:01.108829] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:48.775 [2024-07-15 13:00:01.108844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:48.775 2024/07/15 13:00:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.OB5piMLiqe subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:48.775 request: 00:15:48.775 { 00:15:48.775 "method": "bdev_nvme_attach_controller", 00:15:48.775 "params": { 00:15:48.775 "name": "TLSTEST", 00:15:48.775 "trtype": "tcp", 00:15:48.775 "traddr": "10.0.0.2", 00:15:48.775 "adrfam": "ipv4", 00:15:48.775 "trsvcid": "4420", 00:15:48.775 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:48.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.775 "prchk_reftag": false, 00:15:48.775 "prchk_guard": false, 00:15:48.775 "hdgst": false, 00:15:48.775 "ddgst": false, 00:15:48.775 "psk": "/tmp/tmp.OB5piMLiqe" 00:15:48.775 } 00:15:48.775 } 00:15:48.775 Got JSON-RPC error response 00:15:48.775 GoRPCClient: error on JSON-RPC call 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84123 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84123 ']' 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84123 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84123 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:48.775 killing process with pid 84123 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84123' 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84123 00:15:48.775 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.775 00:15:48.775 Latency(us) 00:15:48.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.775 =================================================================================================================== 00:15:48.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.775 [2024-07-15 13:00:01.155056] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:48.775 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84123 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84163 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:49.033 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84163 /var/tmp/bdevperf.sock 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84163 ']' 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.034 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.034 [2024-07-15 13:00:01.379589] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:15:49.034 [2024-07-15 13:00:01.379724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84163 ] 00:15:49.291 [2024-07-15 13:00:01.520124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.291 [2024-07-15 13:00:01.584369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.291 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.291 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:49.291 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:49.548 [2024-07-15 13:00:01.965020] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:49.548 [2024-07-15 13:00:01.966642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2f240 (9): Bad file descriptor 00:15:49.548 [2024-07-15 13:00:01.967637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:49.548 [2024-07-15 13:00:01.967663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:49.548 [2024-07-15 13:00:01.967677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:49.548 2024/07/15 13:00:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:49.548 request: 00:15:49.548 { 00:15:49.548 "method": "bdev_nvme_attach_controller", 00:15:49.548 "params": { 00:15:49.548 "name": "TLSTEST", 00:15:49.548 "trtype": "tcp", 00:15:49.548 "traddr": "10.0.0.2", 00:15:49.548 "adrfam": "ipv4", 00:15:49.548 "trsvcid": "4420", 00:15:49.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.548 "prchk_reftag": false, 00:15:49.548 "prchk_guard": false, 00:15:49.548 "hdgst": false, 00:15:49.548 "ddgst": false 00:15:49.548 } 00:15:49.548 } 00:15:49.548 Got JSON-RPC error response 00:15:49.548 GoRPCClient: error on JSON-RPC call 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84163 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84163 ']' 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84163 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.548 13:00:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84163 00:15:49.548 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:49.548 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:15:49.548 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84163' 00:15:49.548 killing process with pid 84163 00:15:49.548 Received shutdown signal, test time was about 10.000000 seconds 00:15:49.548 00:15:49.548 Latency(us) 00:15:49.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.548 =================================================================================================================== 00:15:49.548 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:49.548 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84163 00:15:49.548 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84163 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83536 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83536 ']' 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83536 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83536 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:49.812 killing process with pid 83536 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83536' 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83536 00:15:49.812 [2024-07-15 13:00:02.185761] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:49.812 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83536 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # digest=2 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.7EX5gcjMNv 
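For reference, a minimal sketch of what the inline "python -" step inside format_interchange_psk above is assumed to compute (this is an illustration, not the nvmf/common.sh code itself): the configured key, with what is assumed to be its CRC32 appended as four little-endian bytes, is base64-encoded and wrapped with the NVMeTLSkey-1 prefix and the digest id (02 here, i.e. the "2" argument).

  key=00112233445566778899aabbccddeeff0011223344556677
  python3 - "$key" <<'EOF'
  import base64, sys, zlib
  key = sys.argv[1].encode()                    # configured key bytes, as passed on the command line
  crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte little-endian CRC32 trailer
  print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
  EOF
  # expected to print the NVMeTLSkey-1:02:...: value captured in key_long above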
00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.7EX5gcjMNv 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:15:50.070 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=84205 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 84205 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84205 ']' 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.071 13:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.071 [2024-07-15 13:00:02.468105] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:15:50.071 [2024-07-15 13:00:02.468206] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.328 [2024-07-15 13:00:02.598866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.328 [2024-07-15 13:00:02.657561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.328 [2024-07-15 13:00:02.657622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.328 [2024-07-15 13:00:02.657634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.328 [2024-07-15 13:00:02.657642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.328 [2024-07-15 13:00:02.657649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
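Condensed for readability, the target-side TLS setup performed in the next chunk (setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv) issues the following rpc.py sequence; the commands are copied from the log, with the full /home/vagrant/spdk_repo/spdk/scripts path shortened to rpc.py:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests TLS on this listener; the "TLS support is considered experimental" notice below comes from this call
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the PSK file written and chmod'ed 0600 above is handed to the subsystem for host1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv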
00:15:50.328 [2024-07-15 13:00:02.657674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7EX5gcjMNv 00:15:51.262 13:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:51.522 [2024-07-15 13:00:03.793530] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.522 13:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:52.087 13:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:52.346 [2024-07-15 13:00:04.573477] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.346 [2024-07-15 13:00:04.573701] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.346 13:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:52.604 malloc0 00:15:52.604 13:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:15:52.862 [2024-07-15 13:00:05.296381] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7EX5gcjMNv 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7EX5gcjMNv' 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84313 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:52.862 13:00:05 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84313 /var/tmp/bdevperf.sock 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84313 ']' 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.862 13:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.121 [2024-07-15 13:00:05.363657] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:15:53.121 [2024-07-15 13:00:05.363749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84313 ] 00:15:53.121 [2024-07-15 13:00:05.500734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.121 [2024-07-15 13:00:05.560995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.080 13:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.080 13:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:54.080 13:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:15:54.080 [2024-07-15 13:00:06.548580] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.080 [2024-07-15 13:00:06.548686] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:54.338 TLSTESTn1 00:15:54.338 13:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:54.338 Running I/O for 10 seconds... 
00:16:06.530 00:16:06.530 Latency(us) 00:16:06.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.530 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:06.530 Verification LBA range: start 0x0 length 0x2000 00:16:06.530 TLSTESTn1 : 10.03 3641.03 14.22 0.00 0.00 35078.23 8460.10 38130.04 00:16:06.530 =================================================================================================================== 00:16:06.530 Total : 3641.03 14.22 0.00 0.00 35078.23 8460.10 38130.04 00:16:06.530 0 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84313 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84313 ']' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84313 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84313 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:06.530 killing process with pid 84313 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84313' 00:16:06.530 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.530 00:16:06.530 Latency(us) 00:16:06.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.530 =================================================================================================================== 00:16:06.530 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84313 00:16:06.530 [2024-07-15 13:00:16.808891] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84313 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.7EX5gcjMNv 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7EX5gcjMNv 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7EX5gcjMNv 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7EX5gcjMNv 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.530 
13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7EX5gcjMNv' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84465 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84465 /var/tmp/bdevperf.sock 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84465 ']' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.530 13:00:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.530 [2024-07-15 13:00:17.024538] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:06.530 [2024-07-15 13:00:17.024632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84465 ] 00:16:06.530 [2024-07-15 13:00:17.158531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.530 [2024-07-15 13:00:17.246647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.530 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.530 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:06.530 13:00:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:16:06.530 [2024-07-15 13:00:17.561960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.530 [2024-07-15 13:00:17.562041] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:06.530 [2024-07-15 13:00:17.562053] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.7EX5gcjMNv 00:16:06.531 2024/07/15 13:00:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.7EX5gcjMNv subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:06.531 request: 00:16:06.531 { 00:16:06.531 "method": "bdev_nvme_attach_controller", 00:16:06.531 "params": { 00:16:06.531 "name": "TLSTEST", 00:16:06.531 "trtype": "tcp", 00:16:06.531 "traddr": "10.0.0.2", 00:16:06.531 "adrfam": "ipv4", 00:16:06.531 "trsvcid": "4420", 00:16:06.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:06.531 "prchk_reftag": false, 00:16:06.531 "prchk_guard": false, 00:16:06.531 "hdgst": false, 00:16:06.531 "ddgst": false, 00:16:06.531 "psk": "/tmp/tmp.7EX5gcjMNv" 00:16:06.531 } 00:16:06.531 } 00:16:06.531 Got JSON-RPC error response 00:16:06.531 GoRPCClient: error on JSON-RPC call 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84465 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84465 ']' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84465 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84465 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:06.531 killing process with pid 84465 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84465' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84465 00:16:06.531 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.531 00:16:06.531 Latency(us) 00:16:06.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.531 =================================================================================================================== 00:16:06.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84465 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84205 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84205 ']' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84205 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84205 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:06.531 killing process with pid 84205 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84205' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84205 00:16:06.531 [2024-07-15 13:00:17.790324] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84205 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=84498 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 84498 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84498 ']' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.531 13:00:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.531 [2024-07-15 13:00:18.012368] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:06.531 [2024-07-15 13:00:18.012463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.531 [2024-07-15 13:00:18.152415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.531 [2024-07-15 13:00:18.208865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.531 [2024-07-15 13:00:18.208918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.531 [2024-07-15 13:00:18.208930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.531 [2024-07-15 13:00:18.208939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.531 [2024-07-15 13:00:18.208946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
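The NOT setup_nvmf_tgt attempt in the next chunk is expected to fail: the PSK file was made world-readable at tls.sh@170 above, and both the host side (bdev_nvme_load_psk, earlier) and the target side (tcp_load_psk, below) reject a key file that is not owner-only. A minimal sketch of the permission property being exercised, reusing the path from the log (illustration only, not part of the test script):

  chmod 0666 /tmp/tmp.7EX5gcjMNv     # group/other readable: SPDK reports "Incorrect permissions for PSK file"
  stat -c '%a' /tmp/tmp.7EX5gcjMNv   # -> 666
  chmod 0600 /tmp/tmp.7EX5gcjMNv     # owner-only, the mode set at tls.sh@162 and accepted by both sides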
00:16:06.531 [2024-07-15 13:00:18.208971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7EX5gcjMNv 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:06.531 [2024-07-15 13:00:18.582780] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:06.531 13:00:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:06.789 [2024-07-15 13:00:19.134809] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:06.789 [2024-07-15 13:00:19.135042] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.789 13:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:07.047 malloc0 00:16:07.047 13:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.304 13:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:16:07.562 [2024-07-15 13:00:19.953642] tcp.c:3661:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:07.562 [2024-07-15 13:00:19.953686] tcp.c:3747:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:07.562 [2024-07-15 13:00:19.953719] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:07.562 2024/07/15 13:00:19 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.7EX5gcjMNv], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:07.562 request: 00:16:07.562 { 00:16:07.562 "method": "nvmf_subsystem_add_host", 00:16:07.562 "params": { 00:16:07.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.562 "host": "nqn.2016-06.io.spdk:host1", 00:16:07.562 "psk": "/tmp/tmp.7EX5gcjMNv" 00:16:07.562 } 00:16:07.562 } 00:16:07.562 Got JSON-RPC error response 00:16:07.562 GoRPCClient: error on JSON-RPC call 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84498 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84498 ']' 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84498 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.562 13:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84498 00:16:07.562 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:07.562 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:07.562 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84498' 00:16:07.562 killing process with pid 84498 00:16:07.562 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84498 00:16:07.562 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84498 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.7EX5gcjMNv 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=84596 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 84596 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84596 ']' 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
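The Internal error above is the expected failure (the call is wrapped in NOT at target/tls.sh@177): the target refuses to load the PSK while the key file's permissions are too open, so the script tightens the mode with chmod 0600, restarts the target, and re-runs setup_nvmf_tgt. A sketch of that sequence, with every path, NQN, address and port taken from the trace; the -k flag on the listener is what enables the experimental TLS support noted in the log:

chmod 0600 /tmp/tmp.7EX5gcjMNv
# rebuild the TLS-enabled target: transport, subsystem, TLS listener, backing bdev, namespace, PSK host
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# with 0600 permissions this call is expected to succeed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv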
00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.831 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.831 [2024-07-15 13:00:20.233983] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:07.831 [2024-07-15 13:00:20.234076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.092 [2024-07-15 13:00:20.364466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.092 [2024-07-15 13:00:20.431634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.092 [2024-07-15 13:00:20.431703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.092 [2024-07-15 13:00:20.431716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.092 [2024-07-15 13:00:20.431725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.092 [2024-07-15 13:00:20.431733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.092 [2024-07-15 13:00:20.431757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7EX5gcjMNv 00:16:08.092 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:08.658 [2024-07-15 13:00:20.831670] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.658 13:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:08.915 13:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:09.173 [2024-07-15 13:00:21.419788] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:09.173 [2024-07-15 13:00:21.420004] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.173 13:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:09.430 malloc0 00:16:09.430 13:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.688 13:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:16:09.945 [2024-07-15 13:00:22.202647] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84686 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84686 /var/tmp/bdevperf.sock 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84686 ']' 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.945 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.945 [2024-07-15 13:00:22.273552] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:09.945 [2024-07-15 13:00:22.273656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84686 ] 00:16:09.945 [2024-07-15 13:00:22.405523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.203 [2024-07-15 13:00:22.477395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.203 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.203 13:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:10.203 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:16:10.460 [2024-07-15 13:00:22.852746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.460 [2024-07-15 13:00:22.852869] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:10.460 TLSTESTn1 00:16:10.719 13:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:10.983 13:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:10.983 "subsystems": [ 00:16:10.983 { 00:16:10.983 "subsystem": "keyring", 00:16:10.983 "config": [] 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "subsystem": "iobuf", 00:16:10.983 "config": [ 00:16:10.983 { 00:16:10.983 "method": "iobuf_set_options", 00:16:10.983 "params": { 00:16:10.983 "large_bufsize": 
135168, 00:16:10.983 "large_pool_count": 1024, 00:16:10.983 "small_bufsize": 8192, 00:16:10.983 "small_pool_count": 8192 00:16:10.983 } 00:16:10.983 } 00:16:10.983 ] 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "subsystem": "sock", 00:16:10.983 "config": [ 00:16:10.983 { 00:16:10.983 "method": "sock_set_default_impl", 00:16:10.983 "params": { 00:16:10.983 "impl_name": "posix" 00:16:10.983 } 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "method": "sock_impl_set_options", 00:16:10.983 "params": { 00:16:10.983 "enable_ktls": false, 00:16:10.983 "enable_placement_id": 0, 00:16:10.983 "enable_quickack": false, 00:16:10.983 "enable_recv_pipe": true, 00:16:10.983 "enable_zerocopy_send_client": false, 00:16:10.983 "enable_zerocopy_send_server": true, 00:16:10.984 "impl_name": "ssl", 00:16:10.984 "recv_buf_size": 4096, 00:16:10.984 "send_buf_size": 4096, 00:16:10.984 "tls_version": 0, 00:16:10.984 "zerocopy_threshold": 0 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "sock_impl_set_options", 00:16:10.984 "params": { 00:16:10.984 "enable_ktls": false, 00:16:10.984 "enable_placement_id": 0, 00:16:10.984 "enable_quickack": false, 00:16:10.984 "enable_recv_pipe": true, 00:16:10.984 "enable_zerocopy_send_client": false, 00:16:10.984 "enable_zerocopy_send_server": true, 00:16:10.984 "impl_name": "posix", 00:16:10.984 "recv_buf_size": 2097152, 00:16:10.984 "send_buf_size": 2097152, 00:16:10.984 "tls_version": 0, 00:16:10.984 "zerocopy_threshold": 0 00:16:10.984 } 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "vmd", 00:16:10.984 "config": [] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "accel", 00:16:10.984 "config": [ 00:16:10.984 { 00:16:10.984 "method": "accel_set_options", 00:16:10.984 "params": { 00:16:10.984 "buf_count": 2048, 00:16:10.984 "large_cache_size": 16, 00:16:10.984 "sequence_count": 2048, 00:16:10.984 "small_cache_size": 128, 00:16:10.984 "task_count": 2048 00:16:10.984 } 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "bdev", 00:16:10.984 "config": [ 00:16:10.984 { 00:16:10.984 "method": "bdev_set_options", 00:16:10.984 "params": { 00:16:10.984 "bdev_auto_examine": true, 00:16:10.984 "bdev_io_cache_size": 256, 00:16:10.984 "bdev_io_pool_size": 65535, 00:16:10.984 "iobuf_large_cache_size": 16, 00:16:10.984 "iobuf_small_cache_size": 128 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_raid_set_options", 00:16:10.984 "params": { 00:16:10.984 "process_window_size_kb": 1024 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_iscsi_set_options", 00:16:10.984 "params": { 00:16:10.984 "timeout_sec": 30 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_nvme_set_options", 00:16:10.984 "params": { 00:16:10.984 "action_on_timeout": "none", 00:16:10.984 "allow_accel_sequence": false, 00:16:10.984 "arbitration_burst": 0, 00:16:10.984 "bdev_retry_count": 3, 00:16:10.984 "ctrlr_loss_timeout_sec": 0, 00:16:10.984 "delay_cmd_submit": true, 00:16:10.984 "dhchap_dhgroups": [ 00:16:10.984 "null", 00:16:10.984 "ffdhe2048", 00:16:10.984 "ffdhe3072", 00:16:10.984 "ffdhe4096", 00:16:10.984 "ffdhe6144", 00:16:10.984 "ffdhe8192" 00:16:10.984 ], 00:16:10.984 "dhchap_digests": [ 00:16:10.984 "sha256", 00:16:10.984 "sha384", 00:16:10.984 "sha512" 00:16:10.984 ], 00:16:10.984 "disable_auto_failback": false, 00:16:10.984 "fast_io_fail_timeout_sec": 0, 00:16:10.984 "generate_uuids": false, 00:16:10.984 "high_priority_weight": 0, 
00:16:10.984 "io_path_stat": false, 00:16:10.984 "io_queue_requests": 0, 00:16:10.984 "keep_alive_timeout_ms": 10000, 00:16:10.984 "low_priority_weight": 0, 00:16:10.984 "medium_priority_weight": 0, 00:16:10.984 "nvme_adminq_poll_period_us": 10000, 00:16:10.984 "nvme_error_stat": false, 00:16:10.984 "nvme_ioq_poll_period_us": 0, 00:16:10.984 "rdma_cm_event_timeout_ms": 0, 00:16:10.984 "rdma_max_cq_size": 0, 00:16:10.984 "rdma_srq_size": 0, 00:16:10.984 "reconnect_delay_sec": 0, 00:16:10.984 "timeout_admin_us": 0, 00:16:10.984 "timeout_us": 0, 00:16:10.984 "transport_ack_timeout": 0, 00:16:10.984 "transport_retry_count": 4, 00:16:10.984 "transport_tos": 0 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_nvme_set_hotplug", 00:16:10.984 "params": { 00:16:10.984 "enable": false, 00:16:10.984 "period_us": 100000 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_malloc_create", 00:16:10.984 "params": { 00:16:10.984 "block_size": 4096, 00:16:10.984 "name": "malloc0", 00:16:10.984 "num_blocks": 8192, 00:16:10.984 "optimal_io_boundary": 0, 00:16:10.984 "physical_block_size": 4096, 00:16:10.984 "uuid": "1257e1de-03b9-4be5-b123-6a5a2e083a54" 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "bdev_wait_for_examine" 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "nbd", 00:16:10.984 "config": [] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "scheduler", 00:16:10.984 "config": [ 00:16:10.984 { 00:16:10.984 "method": "framework_set_scheduler", 00:16:10.984 "params": { 00:16:10.984 "name": "static" 00:16:10.984 } 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "subsystem": "nvmf", 00:16:10.984 "config": [ 00:16:10.984 { 00:16:10.984 "method": "nvmf_set_config", 00:16:10.984 "params": { 00:16:10.984 "admin_cmd_passthru": { 00:16:10.984 "identify_ctrlr": false 00:16:10.984 }, 00:16:10.984 "discovery_filter": "match_any" 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_set_max_subsystems", 00:16:10.984 "params": { 00:16:10.984 "max_subsystems": 1024 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_set_crdt", 00:16:10.984 "params": { 00:16:10.984 "crdt1": 0, 00:16:10.984 "crdt2": 0, 00:16:10.984 "crdt3": 0 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_create_transport", 00:16:10.984 "params": { 00:16:10.984 "abort_timeout_sec": 1, 00:16:10.984 "ack_timeout": 0, 00:16:10.984 "buf_cache_size": 4294967295, 00:16:10.984 "c2h_success": false, 00:16:10.984 "data_wr_pool_size": 0, 00:16:10.984 "dif_insert_or_strip": false, 00:16:10.984 "in_capsule_data_size": 4096, 00:16:10.984 "io_unit_size": 131072, 00:16:10.984 "max_aq_depth": 128, 00:16:10.984 "max_io_qpairs_per_ctrlr": 127, 00:16:10.984 "max_io_size": 131072, 00:16:10.984 "max_queue_depth": 128, 00:16:10.984 "num_shared_buffers": 511, 00:16:10.984 "sock_priority": 0, 00:16:10.984 "trtype": "TCP", 00:16:10.984 "zcopy": false 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_create_subsystem", 00:16:10.984 "params": { 00:16:10.984 "allow_any_host": false, 00:16:10.984 "ana_reporting": false, 00:16:10.984 "max_cntlid": 65519, 00:16:10.984 "max_namespaces": 10, 00:16:10.984 "min_cntlid": 1, 00:16:10.984 "model_number": "SPDK bdev Controller", 00:16:10.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.984 "serial_number": "SPDK00000000000001" 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": 
"nvmf_subsystem_add_host", 00:16:10.984 "params": { 00:16:10.984 "host": "nqn.2016-06.io.spdk:host1", 00:16:10.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.984 "psk": "/tmp/tmp.7EX5gcjMNv" 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_subsystem_add_ns", 00:16:10.984 "params": { 00:16:10.984 "namespace": { 00:16:10.984 "bdev_name": "malloc0", 00:16:10.984 "nguid": "1257E1DE03B94BE5B1236A5A2E083A54", 00:16:10.984 "no_auto_visible": false, 00:16:10.984 "nsid": 1, 00:16:10.984 "uuid": "1257e1de-03b9-4be5-b123-6a5a2e083a54" 00:16:10.984 }, 00:16:10.984 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:10.984 } 00:16:10.984 }, 00:16:10.984 { 00:16:10.984 "method": "nvmf_subsystem_add_listener", 00:16:10.984 "params": { 00:16:10.984 "listen_address": { 00:16:10.984 "adrfam": "IPv4", 00:16:10.984 "traddr": "10.0.0.2", 00:16:10.984 "trsvcid": "4420", 00:16:10.984 "trtype": "TCP" 00:16:10.984 }, 00:16:10.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.984 "secure_channel": true 00:16:10.984 } 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 } 00:16:10.984 ] 00:16:10.984 }' 00:16:10.984 13:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:11.241 13:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:11.241 "subsystems": [ 00:16:11.241 { 00:16:11.241 "subsystem": "keyring", 00:16:11.241 "config": [] 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "subsystem": "iobuf", 00:16:11.241 "config": [ 00:16:11.241 { 00:16:11.241 "method": "iobuf_set_options", 00:16:11.241 "params": { 00:16:11.241 "large_bufsize": 135168, 00:16:11.241 "large_pool_count": 1024, 00:16:11.241 "small_bufsize": 8192, 00:16:11.241 "small_pool_count": 8192 00:16:11.241 } 00:16:11.241 } 00:16:11.241 ] 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "subsystem": "sock", 00:16:11.241 "config": [ 00:16:11.241 { 00:16:11.241 "method": "sock_set_default_impl", 00:16:11.241 "params": { 00:16:11.241 "impl_name": "posix" 00:16:11.241 } 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "method": "sock_impl_set_options", 00:16:11.241 "params": { 00:16:11.241 "enable_ktls": false, 00:16:11.241 "enable_placement_id": 0, 00:16:11.241 "enable_quickack": false, 00:16:11.241 "enable_recv_pipe": true, 00:16:11.241 "enable_zerocopy_send_client": false, 00:16:11.241 "enable_zerocopy_send_server": true, 00:16:11.241 "impl_name": "ssl", 00:16:11.241 "recv_buf_size": 4096, 00:16:11.241 "send_buf_size": 4096, 00:16:11.241 "tls_version": 0, 00:16:11.241 "zerocopy_threshold": 0 00:16:11.241 } 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "method": "sock_impl_set_options", 00:16:11.241 "params": { 00:16:11.241 "enable_ktls": false, 00:16:11.241 "enable_placement_id": 0, 00:16:11.241 "enable_quickack": false, 00:16:11.241 "enable_recv_pipe": true, 00:16:11.241 "enable_zerocopy_send_client": false, 00:16:11.241 "enable_zerocopy_send_server": true, 00:16:11.241 "impl_name": "posix", 00:16:11.241 "recv_buf_size": 2097152, 00:16:11.241 "send_buf_size": 2097152, 00:16:11.241 "tls_version": 0, 00:16:11.241 "zerocopy_threshold": 0 00:16:11.241 } 00:16:11.241 } 00:16:11.241 ] 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "subsystem": "vmd", 00:16:11.241 "config": [] 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "subsystem": "accel", 00:16:11.241 "config": [ 00:16:11.241 { 00:16:11.241 "method": "accel_set_options", 00:16:11.241 "params": { 00:16:11.241 "buf_count": 2048, 00:16:11.241 "large_cache_size": 16, 00:16:11.241 "sequence_count": 2048, 00:16:11.241 
"small_cache_size": 128, 00:16:11.241 "task_count": 2048 00:16:11.241 } 00:16:11.241 } 00:16:11.241 ] 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "subsystem": "bdev", 00:16:11.241 "config": [ 00:16:11.241 { 00:16:11.241 "method": "bdev_set_options", 00:16:11.241 "params": { 00:16:11.241 "bdev_auto_examine": true, 00:16:11.241 "bdev_io_cache_size": 256, 00:16:11.242 "bdev_io_pool_size": 65535, 00:16:11.242 "iobuf_large_cache_size": 16, 00:16:11.242 "iobuf_small_cache_size": 128 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_raid_set_options", 00:16:11.242 "params": { 00:16:11.242 "process_window_size_kb": 1024 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_iscsi_set_options", 00:16:11.242 "params": { 00:16:11.242 "timeout_sec": 30 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_nvme_set_options", 00:16:11.242 "params": { 00:16:11.242 "action_on_timeout": "none", 00:16:11.242 "allow_accel_sequence": false, 00:16:11.242 "arbitration_burst": 0, 00:16:11.242 "bdev_retry_count": 3, 00:16:11.242 "ctrlr_loss_timeout_sec": 0, 00:16:11.242 "delay_cmd_submit": true, 00:16:11.242 "dhchap_dhgroups": [ 00:16:11.242 "null", 00:16:11.242 "ffdhe2048", 00:16:11.242 "ffdhe3072", 00:16:11.242 "ffdhe4096", 00:16:11.242 "ffdhe6144", 00:16:11.242 "ffdhe8192" 00:16:11.242 ], 00:16:11.242 "dhchap_digests": [ 00:16:11.242 "sha256", 00:16:11.242 "sha384", 00:16:11.242 "sha512" 00:16:11.242 ], 00:16:11.242 "disable_auto_failback": false, 00:16:11.242 "fast_io_fail_timeout_sec": 0, 00:16:11.242 "generate_uuids": false, 00:16:11.242 "high_priority_weight": 0, 00:16:11.242 "io_path_stat": false, 00:16:11.242 "io_queue_requests": 512, 00:16:11.242 "keep_alive_timeout_ms": 10000, 00:16:11.242 "low_priority_weight": 0, 00:16:11.242 "medium_priority_weight": 0, 00:16:11.242 "nvme_adminq_poll_period_us": 10000, 00:16:11.242 "nvme_error_stat": false, 00:16:11.242 "nvme_ioq_poll_period_us": 0, 00:16:11.242 "rdma_cm_event_timeout_ms": 0, 00:16:11.242 "rdma_max_cq_size": 0, 00:16:11.242 "rdma_srq_size": 0, 00:16:11.242 "reconnect_delay_sec": 0, 00:16:11.242 "timeout_admin_us": 0, 00:16:11.242 "timeout_us": 0, 00:16:11.242 "transport_ack_timeout": 0, 00:16:11.242 "transport_retry_count": 4, 00:16:11.242 "transport_tos": 0 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_nvme_attach_controller", 00:16:11.242 "params": { 00:16:11.242 "adrfam": "IPv4", 00:16:11.242 "ctrlr_loss_timeout_sec": 0, 00:16:11.242 "ddgst": false, 00:16:11.242 "fast_io_fail_timeout_sec": 0, 00:16:11.242 "hdgst": false, 00:16:11.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.242 "name": "TLSTEST", 00:16:11.242 "prchk_guard": false, 00:16:11.242 "prchk_reftag": false, 00:16:11.242 "psk": "/tmp/tmp.7EX5gcjMNv", 00:16:11.242 "reconnect_delay_sec": 0, 00:16:11.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.242 "traddr": "10.0.0.2", 00:16:11.242 "trsvcid": "4420", 00:16:11.242 "trtype": "TCP" 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_nvme_set_hotplug", 00:16:11.242 "params": { 00:16:11.242 "enable": false, 00:16:11.242 "period_us": 100000 00:16:11.242 } 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "method": "bdev_wait_for_examine" 00:16:11.242 } 00:16:11.242 ] 00:16:11.242 }, 00:16:11.242 { 00:16:11.242 "subsystem": "nbd", 00:16:11.242 "config": [] 00:16:11.242 } 00:16:11.242 ] 00:16:11.242 }' 00:16:11.242 13:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84686 00:16:11.242 13:00:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84686 ']' 00:16:11.242 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84686 00:16:11.242 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:11.242 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.242 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84686 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:11.500 killing process with pid 84686 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84686' 00:16:11.500 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.500 00:16:11.500 Latency(us) 00:16:11.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.500 =================================================================================================================== 00:16:11.500 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84686 00:16:11.500 [2024-07-15 13:00:23.712348] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84686 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84596 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84596 ']' 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84596 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84596 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:11.500 killing process with pid 84596 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84596' 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84596 00:16:11.500 [2024-07-15 13:00:23.898502] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:11.500 13:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84596 00:16:11.757 13:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:11.757 13:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:11.757 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.757 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.757 13:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:11.757 "subsystems": [ 00:16:11.757 { 00:16:11.757 "subsystem": "keyring", 00:16:11.757 "config": [] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "iobuf", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": 
"iobuf_set_options", 00:16:11.757 "params": { 00:16:11.757 "large_bufsize": 135168, 00:16:11.757 "large_pool_count": 1024, 00:16:11.757 "small_bufsize": 8192, 00:16:11.757 "small_pool_count": 8192 00:16:11.757 } 00:16:11.757 } 00:16:11.757 ] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "sock", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": "sock_set_default_impl", 00:16:11.757 "params": { 00:16:11.757 "impl_name": "posix" 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "sock_impl_set_options", 00:16:11.757 "params": { 00:16:11.757 "enable_ktls": false, 00:16:11.757 "enable_placement_id": 0, 00:16:11.757 "enable_quickack": false, 00:16:11.757 "enable_recv_pipe": true, 00:16:11.757 "enable_zerocopy_send_client": false, 00:16:11.757 "enable_zerocopy_send_server": true, 00:16:11.757 "impl_name": "ssl", 00:16:11.757 "recv_buf_size": 4096, 00:16:11.757 "send_buf_size": 4096, 00:16:11.757 "tls_version": 0, 00:16:11.757 "zerocopy_threshold": 0 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "sock_impl_set_options", 00:16:11.757 "params": { 00:16:11.757 "enable_ktls": false, 00:16:11.757 "enable_placement_id": 0, 00:16:11.757 "enable_quickack": false, 00:16:11.757 "enable_recv_pipe": true, 00:16:11.757 "enable_zerocopy_send_client": false, 00:16:11.757 "enable_zerocopy_send_server": true, 00:16:11.757 "impl_name": "posix", 00:16:11.757 "recv_buf_size": 2097152, 00:16:11.757 "send_buf_size": 2097152, 00:16:11.757 "tls_version": 0, 00:16:11.757 "zerocopy_threshold": 0 00:16:11.757 } 00:16:11.757 } 00:16:11.757 ] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "vmd", 00:16:11.757 "config": [] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "accel", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": "accel_set_options", 00:16:11.757 "params": { 00:16:11.757 "buf_count": 2048, 00:16:11.757 "large_cache_size": 16, 00:16:11.757 "sequence_count": 2048, 00:16:11.757 "small_cache_size": 128, 00:16:11.757 "task_count": 2048 00:16:11.757 } 00:16:11.757 } 00:16:11.757 ] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "bdev", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": "bdev_set_options", 00:16:11.757 "params": { 00:16:11.757 "bdev_auto_examine": true, 00:16:11.757 "bdev_io_cache_size": 256, 00:16:11.757 "bdev_io_pool_size": 65535, 00:16:11.757 "iobuf_large_cache_size": 16, 00:16:11.757 "iobuf_small_cache_size": 128 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_raid_set_options", 00:16:11.757 "params": { 00:16:11.757 "process_window_size_kb": 1024 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_iscsi_set_options", 00:16:11.757 "params": { 00:16:11.757 "timeout_sec": 30 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_nvme_set_options", 00:16:11.757 "params": { 00:16:11.757 "action_on_timeout": "none", 00:16:11.757 "allow_accel_sequence": false, 00:16:11.757 "arbitration_burst": 0, 00:16:11.757 "bdev_retry_count": 3, 00:16:11.757 "ctrlr_loss_timeout_sec": 0, 00:16:11.757 "delay_cmd_submit": true, 00:16:11.757 "dhchap_dhgroups": [ 00:16:11.757 "null", 00:16:11.757 "ffdhe2048", 00:16:11.757 "ffdhe3072", 00:16:11.757 "ffdhe4096", 00:16:11.757 "ffdhe6144", 00:16:11.757 "ffdhe8192" 00:16:11.757 ], 00:16:11.757 "dhchap_digests": [ 00:16:11.757 "sha256", 00:16:11.757 "sha384", 00:16:11.757 "sha512" 00:16:11.757 ], 00:16:11.757 "disable_auto_failback": false, 00:16:11.757 "fast_io_fail_timeout_sec": 0, 00:16:11.757 
"generate_uuids": false, 00:16:11.757 "high_priority_weight": 0, 00:16:11.757 "io_path_stat": false, 00:16:11.757 "io_queue_requests": 0, 00:16:11.757 "keep_alive_timeout_ms": 10000, 00:16:11.757 "low_priority_weight": 0, 00:16:11.757 "medium_priority_weight": 0, 00:16:11.757 "nvme_adminq_poll_period_us": 10000, 00:16:11.757 "nvme_error_stat": false, 00:16:11.757 "nvme_ioq_poll_period_us": 0, 00:16:11.757 "rdma_cm_event_timeout_ms": 0, 00:16:11.757 "rdma_max_cq_size": 0, 00:16:11.757 "rdma_srq_size": 0, 00:16:11.757 "reconnect_delay_sec": 0, 00:16:11.757 "timeout_admin_us": 0, 00:16:11.757 "timeout_us": 0, 00:16:11.757 "transport_ack_timeout": 0, 00:16:11.757 "transport_retry_count": 4, 00:16:11.757 "transport_tos": 0 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_nvme_set_hotplug", 00:16:11.757 "params": { 00:16:11.757 "enable": false, 00:16:11.757 "period_us": 100000 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_malloc_create", 00:16:11.757 "params": { 00:16:11.757 "block_size": 4096, 00:16:11.757 "name": "malloc0", 00:16:11.757 "num_blocks": 8192, 00:16:11.757 "optimal_io_boundary": 0, 00:16:11.757 "physical_block_size": 4096, 00:16:11.757 "uuid": "1257e1de-03b9-4be5-b123-6a5a2e083a54" 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "bdev_wait_for_examine" 00:16:11.757 } 00:16:11.757 ] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "nbd", 00:16:11.757 "config": [] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "scheduler", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": "framework_set_scheduler", 00:16:11.757 "params": { 00:16:11.757 "name": "static" 00:16:11.757 } 00:16:11.757 } 00:16:11.757 ] 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "subsystem": "nvmf", 00:16:11.757 "config": [ 00:16:11.757 { 00:16:11.757 "method": "nvmf_set_config", 00:16:11.757 "params": { 00:16:11.757 "admin_cmd_passthru": { 00:16:11.757 "identify_ctrlr": false 00:16:11.757 }, 00:16:11.757 "discovery_filter": "match_any" 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "nvmf_set_max_subsystems", 00:16:11.757 "params": { 00:16:11.757 "max_subsystems": 1024 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "nvmf_set_crdt", 00:16:11.757 "params": { 00:16:11.757 "crdt1": 0, 00:16:11.757 "crdt2": 0, 00:16:11.757 "crdt3": 0 00:16:11.757 } 00:16:11.757 }, 00:16:11.757 { 00:16:11.757 "method": "nvmf_create_transport", 00:16:11.757 "params": { 00:16:11.757 "abort_timeout_sec": 1, 00:16:11.757 "ack_timeout": 0, 00:16:11.757 "buf_cache_size": 4294967295, 00:16:11.757 "c2h_success": false, 00:16:11.757 "data_wr_pool_size": 0, 00:16:11.757 "dif_insert_or_strip": false, 00:16:11.757 "in_capsule_data_size": 4096, 00:16:11.758 "io_unit_size": 131072, 00:16:11.758 "max_aq_depth": 128, 00:16:11.758 "max_io_qpairs_per_ctrlr": 127, 00:16:11.758 "max_io_size": 131072, 00:16:11.758 "max_queue_depth": 128, 00:16:11.758 "num_shared_buffers": 511, 00:16:11.758 "sock_priority": 0, 00:16:11.758 "trtype": "TCP", 00:16:11.758 "zcopy": false 00:16:11.758 } 00:16:11.758 }, 00:16:11.758 { 00:16:11.758 "method": "nvmf_create_subsystem", 00:16:11.758 "params": { 00:16:11.758 "allow_any_host": false, 00:16:11.758 "ana_reporting": false, 00:16:11.758 "max_cntlid": 65519, 00:16:11.758 "max_namespaces": 10, 00:16:11.758 "min_cntlid": 1, 00:16:11.758 "model_number": "SPDK bdev Controller", 00:16:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.758 "serial_number": "SPDK00000000000001" 00:16:11.758 
} 00:16:11.758 }, 00:16:11.758 { 00:16:11.758 "method": "nvmf_subsystem_add_host", 00:16:11.758 "params": { 00:16:11.758 "host": "nqn.2016-06.io.spdk:host1", 00:16:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.758 "psk": "/tmp/tmp.7EX5gcjMNv" 00:16:11.758 } 00:16:11.758 }, 00:16:11.758 { 00:16:11.758 "method": "nvmf_subsystem_add_ns", 00:16:11.758 "params": { 00:16:11.758 "namespace": { 00:16:11.758 "bdev_name": "malloc0", 00:16:11.758 "nguid": "1257E1DE03B94BE5B1236A5A2E083A54", 00:16:11.758 "no_auto_visible": false, 00:16:11.758 "nsid": 1, 00:16:11.758 "uuid": "1257e1de-03b9-4be5-b123-6a5a2e083a54" 00:16:11.758 }, 00:16:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:11.758 } 00:16:11.758 }, 00:16:11.758 { 00:16:11.758 "method": "nvmf_subsystem_add_listener", 00:16:11.758 "params": { 00:16:11.758 "listen_address": { 00:16:11.758 "adrfam": "IPv4", 00:16:11.758 "traddr": "10.0.0.2", 00:16:11.758 "trsvcid": "4420", 00:16:11.758 "trtype": "TCP" 00:16:11.758 }, 00:16:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.758 "secure_channel": true 00:16:11.758 } 00:16:11.758 } 00:16:11.758 ] 00:16:11.758 } 00:16:11.758 ] 00:16:11.758 }' 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=84751 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 84751 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84751 ']' 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.758 13:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.758 [2024-07-15 13:00:24.128974] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:11.758 [2024-07-15 13:00:24.129074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.014 [2024-07-15 13:00:24.267253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.014 [2024-07-15 13:00:24.324884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.014 [2024-07-15 13:00:24.324937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.014 [2024-07-15 13:00:24.324950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.014 [2024-07-15 13:00:24.324959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.014 [2024-07-15 13:00:24.324967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
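What the -c /dev/fd/62 invocation above amounts to: the JSON captured earlier with save_config (target/tls.sh@196) is replayed into a fresh nvmf_tgt, so the new instance comes up with the subsystem, TLS listener and PSK host already configured instead of being rebuilt RPC by RPC. A hedged sketch of that pattern; /dev/fd/62 in the trace is simply what bash process substitution resolves to, and the exact plumbing inside nvmf/common.sh may differ:

# capture the running target's configuration as JSON
tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
# replay it into a new target; <(...) shows up as -c /dev/fd/NN in the process arguments
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &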
00:16:12.014 [2024-07-15 13:00:24.325048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.271 [2024-07-15 13:00:24.508032] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.271 [2024-07-15 13:00:24.523953] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:12.271 [2024-07-15 13:00:24.539953] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:12.271 [2024-07-15 13:00:24.540152] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84795 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84795 /var/tmp/bdevperf.sock 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84795 ']' 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
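On the initiator side, bdevperf is started the same way: -z keeps it idle waiting for RPC, and its own JSON config (echoed just below as bdevperfconf) arrives on /dev/fd/63. Once its socket is up, the workload is kicked off over RPC rather than at process start. A minimal sketch of that flow using the exact flags from the trace; in the real script a waitforlisten step sits between the launch and perform_tests:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
bdevperf_pid=$!
# -z keeps bdevperf idle until perform_tests is sent over its RPC socket
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests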
00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:12.836 "subsystems": [ 00:16:12.836 { 00:16:12.836 "subsystem": "keyring", 00:16:12.836 "config": [] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "iobuf", 00:16:12.836 "config": [ 00:16:12.836 { 00:16:12.836 "method": "iobuf_set_options", 00:16:12.836 "params": { 00:16:12.836 "large_bufsize": 135168, 00:16:12.836 "large_pool_count": 1024, 00:16:12.836 "small_bufsize": 8192, 00:16:12.836 "small_pool_count": 8192 00:16:12.836 } 00:16:12.836 } 00:16:12.836 ] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "sock", 00:16:12.836 "config": [ 00:16:12.836 { 00:16:12.836 "method": "sock_set_default_impl", 00:16:12.836 "params": { 00:16:12.836 "impl_name": "posix" 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "sock_impl_set_options", 00:16:12.836 "params": { 00:16:12.836 "enable_ktls": false, 00:16:12.836 "enable_placement_id": 0, 00:16:12.836 "enable_quickack": false, 00:16:12.836 "enable_recv_pipe": true, 00:16:12.836 "enable_zerocopy_send_client": false, 00:16:12.836 "enable_zerocopy_send_server": true, 00:16:12.836 "impl_name": "ssl", 00:16:12.836 "recv_buf_size": 4096, 00:16:12.836 "send_buf_size": 4096, 00:16:12.836 "tls_version": 0, 00:16:12.836 "zerocopy_threshold": 0 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "sock_impl_set_options", 00:16:12.836 "params": { 00:16:12.836 "enable_ktls": false, 00:16:12.836 "enable_placement_id": 0, 00:16:12.836 "enable_quickack": false, 00:16:12.836 "enable_recv_pipe": true, 00:16:12.836 "enable_zerocopy_send_client": false, 00:16:12.836 "enable_zerocopy_send_server": true, 00:16:12.836 "impl_name": "posix", 00:16:12.836 "recv_buf_size": 2097152, 00:16:12.836 "send_buf_size": 2097152, 00:16:12.836 "tls_version": 0, 00:16:12.836 "zerocopy_threshold": 0 00:16:12.836 } 00:16:12.836 } 00:16:12.836 ] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "vmd", 00:16:12.836 "config": [] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "accel", 00:16:12.836 "config": [ 00:16:12.836 { 00:16:12.836 "method": "accel_set_options", 00:16:12.836 "params": { 00:16:12.836 "buf_count": 2048, 00:16:12.836 "large_cache_size": 16, 00:16:12.836 "sequence_count": 2048, 00:16:12.836 "small_cache_size": 128, 00:16:12.836 "task_count": 2048 00:16:12.836 } 00:16:12.836 } 00:16:12.836 ] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "bdev", 00:16:12.836 "config": [ 00:16:12.836 { 00:16:12.836 "method": "bdev_set_options", 00:16:12.836 "params": { 00:16:12.836 "bdev_auto_examine": true, 00:16:12.836 "bdev_io_cache_size": 256, 00:16:12.836 "bdev_io_pool_size": 65535, 00:16:12.836 "iobuf_large_cache_size": 16, 00:16:12.836 "iobuf_small_cache_size": 128 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_raid_set_options", 00:16:12.836 "params": { 00:16:12.836 "process_window_size_kb": 1024 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_iscsi_set_options", 00:16:12.836 "params": { 00:16:12.836 "timeout_sec": 30 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_nvme_set_options", 00:16:12.836 "params": { 00:16:12.836 "action_on_timeout": "none", 00:16:12.836 "allow_accel_sequence": false, 00:16:12.836 "arbitration_burst": 0, 00:16:12.836 "bdev_retry_count": 3, 00:16:12.836 "ctrlr_loss_timeout_sec": 0, 00:16:12.836 "delay_cmd_submit": true, 00:16:12.836 
"dhchap_dhgroups": [ 00:16:12.836 "null", 00:16:12.836 "ffdhe2048", 00:16:12.836 "ffdhe3072", 00:16:12.836 "ffdhe4096", 00:16:12.836 "ffdhe6144", 00:16:12.836 "ffdhe8192" 00:16:12.836 ], 00:16:12.836 "dhchap_digests": [ 00:16:12.836 "sha256", 00:16:12.836 "sha384", 00:16:12.836 "sha512" 00:16:12.836 ], 00:16:12.836 "disable_auto_failback": false, 00:16:12.836 "fast_io_fail_timeout_sec": 0, 00:16:12.836 "generate_uuids": false, 00:16:12.836 "high_priority_weight": 0, 00:16:12.836 "io_path_stat": false, 00:16:12.836 "io_queue_requests": 512, 00:16:12.836 "keep_alive_timeout_ms": 10000, 00:16:12.836 "low_priority_weight": 0, 00:16:12.836 "medium_priority_weight": 0, 00:16:12.836 "nvme_adminq_poll_period_us": 10000, 00:16:12.836 "nvme_error_stat": false, 00:16:12.836 "nvme_ioq_poll_period_us": 0, 00:16:12.836 "rdma_cm_event_timeout_ms": 0, 00:16:12.836 "rdma_max_cq_size": 0, 00:16:12.836 "rdma_srq_size": 0, 00:16:12.836 "reconnect_delay_sec": 0, 00:16:12.836 "timeout_admin_us": 0, 00:16:12.836 "timeout_us": 0, 00:16:12.836 "transport_ack_timeout": 0, 00:16:12.836 "transport_retry_count": 4, 00:16:12.836 "transport_tos": 0 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_nvme_attach_controller", 00:16:12.836 "params": { 00:16:12.836 "adrfam": "IPv4", 00:16:12.836 "ctrlr_loss_timeout_sec": 0, 00:16:12.836 "ddgst": false, 00:16:12.836 "fast_io_fail_timeout_sec": 0, 00:16:12.836 "hdgst": false, 00:16:12.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.836 "name": "TLSTEST", 00:16:12.836 "prchk_guard": false, 00:16:12.836 "prchk_reftag": false, 00:16:12.836 "psk": "/tmp/tmp.7EX5gcjMNv", 00:16:12.836 "reconnect_delay_sec": 0, 00:16:12.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.836 "traddr": "10.0.0.2", 00:16:12.836 "trsvcid": "4420", 00:16:12.836 "trtype": "TCP" 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_nvme_set_hotplug", 00:16:12.836 "params": { 00:16:12.836 "enable": false, 00:16:12.836 "period_us": 100000 00:16:12.836 } 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "method": "bdev_wait_for_examine" 00:16:12.836 } 00:16:12.836 ] 00:16:12.836 }, 00:16:12.836 { 00:16:12.836 "subsystem": "nbd", 00:16:12.836 "config": [] 00:16:12.836 } 00:16:12.836 ] 00:16:12.836 }' 00:16:12.836 13:00:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.836 [2024-07-15 13:00:25.238432] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:16:12.836 [2024-07-15 13:00:25.238584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84795 ] 00:16:13.094 [2024-07-15 13:00:25.383736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.094 [2024-07-15 13:00:25.469731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.352 [2024-07-15 13:00:25.599315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:13.352 [2024-07-15 13:00:25.599425] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:13.915 13:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.915 13:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:13.915 13:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:14.172 Running I/O for 10 seconds... 00:16:24.135 00:16:24.135 Latency(us) 00:16:24.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:24.135 Verification LBA range: start 0x0 length 0x2000 00:16:24.135 TLSTESTn1 : 10.02 3757.92 14.68 0.00 0.00 33995.29 7387.69 32887.16 00:16:24.135 =================================================================================================================== 00:16:24.135 Total : 3757.92 14.68 0.00 0.00 33995.29 7387.69 32887.16 00:16:24.135 0 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84795 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84795 ']' 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84795 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84795 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:24.135 killing process with pid 84795 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84795' 00:16:24.135 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84795 00:16:24.135 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.135 00:16:24.135 Latency(us) 00:16:24.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.135 =================================================================================================================== 00:16:24.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.135 [2024-07-15 13:00:36.450454] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:24.135 13:00:36 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84795 00:16:24.393 13:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84751 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84751 ']' 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84751 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84751 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.394 killing process with pid 84751 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84751' 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84751 00:16:24.394 [2024-07-15 13:00:36.633243] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84751 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=84941 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 84941 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84941 ']' 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.394 13:00:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.651 [2024-07-15 13:00:36.875074] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:24.651 [2024-07-15 13:00:36.875219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.651 [2024-07-15 13:00:37.021789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.651 [2024-07-15 13:00:37.093233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:24.651 [2024-07-15 13:00:37.093286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.651 [2024-07-15 13:00:37.093299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.651 [2024-07-15 13:00:37.093310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.651 [2024-07-15 13:00:37.093319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.651 [2024-07-15 13:00:37.093351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.588 13:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.588 13:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:25.588 13:00:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:25.588 13:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.588 13:00:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.589 13:00:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.589 13:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.7EX5gcjMNv 00:16:25.589 13:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7EX5gcjMNv 00:16:25.589 13:00:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:25.846 [2024-07-15 13:00:38.268371] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.846 13:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:26.409 13:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:26.668 [2024-07-15 13:00:38.880474] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:26.668 [2024-07-15 13:00:38.880713] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.668 13:00:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:26.950 malloc0 00:16:26.950 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:27.221 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7EX5gcjMNv 00:16:27.478 [2024-07-15 13:00:39.755635] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85049 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85049 /var/tmp/bdevperf.sock 00:16:27.478 
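For readability, the target-side sequence that the xtrace above records can be condensed into the sketch below. Every command and argument is taken verbatim from this run (the PSK file under /tmp is a temporary file created earlier in the test, so its name differs from run to run); the rpc= and key= shorthand variables are the only additions here.

  # setup_nvmf_tgt, condensed from the target/tls.sh@49-58 trace above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.7EX5gcjMNv                               # temporary PSK file from this run

  $rpc nvmf_create_transport -t tcp -o                  # "*** TCP Transport Init ***"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
                                                        # -k asks for a TLS listener (still flagged experimental)
  $rpc bdev_malloc_create 32 4096 -b malloc0            # 32 MiB malloc bdev backing namespace 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key
                                                        # PSK handed over as a path; the trace flags this form
                                                        # as deprecated for removal in v24.09

The bdevperf instance that follows (pid 85049) is started with -z -r /var/tmp/bdevperf.sock, so it waits to be configured and driven over its own RPC socket by the attach and perform_tests calls traced next.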
13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85049 ']' 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.478 13:00:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.478 [2024-07-15 13:00:39.828894] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:27.478 [2024-07-15 13:00:39.828994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85049 ] 00:16:27.736 [2024-07-15 13:00:39.966921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.736 [2024-07-15 13:00:40.036449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.672 13:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.672 13:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:28.672 13:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7EX5gcjMNv 00:16:28.672 13:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:28.929 [2024-07-15 13:00:41.362963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.187 nvme0n1 00:16:29.187 13:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:29.187 Running I/O for 1 seconds... 
00:16:30.122 00:16:30.122 Latency(us) 00:16:30.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.122 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:30.122 Verification LBA range: start 0x0 length 0x2000 00:16:30.122 nvme0n1 : 1.02 3920.45 15.31 0.00 0.00 32337.92 5749.29 25737.77 00:16:30.122 =================================================================================================================== 00:16:30.122 Total : 3920.45 15.31 0.00 0.00 32337.92 5749.29 25737.77 00:16:30.122 0 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85049 ']' 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:30.379 killing process with pid 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85049' 00:16:30.379 Received shutdown signal, test time was about 1.000000 seconds 00:16:30.379 00:16:30.379 Latency(us) 00:16:30.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.379 =================================================================================================================== 00:16:30.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85049 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84941 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84941 ']' 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84941 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84941 00:16:30.379 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:30.379 killing process with pid 84941 00:16:30.380 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:30.380 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84941' 00:16:30.380 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84941 00:16:30.380 [2024-07-15 13:00:42.801111] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:30.380 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84941 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=85124 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 85124 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85124 ']' 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.638 13:00:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.638 [2024-07-15 13:00:43.037240] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:30.638 [2024-07-15 13:00:43.037374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.896 [2024-07-15 13:00:43.178582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.896 [2024-07-15 13:00:43.239376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.896 [2024-07-15 13:00:43.239438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.896 [2024-07-15 13:00:43.239451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.896 [2024-07-15 13:00:43.239460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.896 [2024-07-15 13:00:43.239467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
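For reference, the initiator side of the run that just completed (bdevperf pid 85049) registers the key file under the keyring name key0 and attaches the controller with --psk key0; unlike the earlier run (pid 84795), whose trace carried the spdk_nvme_ctrlr_opts.psk deprecation warning, no deprecation is reported for this form. Condensed from the trace above, with the same caveat that the long binary paths are specific to this workspace:

  # initiator side for bdevperf pid 85049, condensed from target/tls.sh@220-232
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # ... wait for /var/tmp/bdevperf.sock to appear (the waitforlisten helper in the trace) ...
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7EX5gcjMNv
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests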
00:16:30.896 [2024-07-15 13:00:43.239499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.828 [2024-07-15 13:00:44.060543] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.828 malloc0 00:16:31.828 [2024-07-15 13:00:44.087750] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:31.828 [2024-07-15 13:00:44.087981] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85173 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85173 /var/tmp/bdevperf.sock 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85173 ']' 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.828 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.829 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.829 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.829 13:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.829 [2024-07-15 13:00:44.170887] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:16:31.829 [2024-07-15 13:00:44.170971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85173 ] 00:16:32.087 [2024-07-15 13:00:44.325003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.087 [2024-07-15 13:00:44.399335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.019 13:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.019 13:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:33.019 13:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7EX5gcjMNv 00:16:33.324 13:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:33.324 [2024-07-15 13:00:45.728105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.581 nvme0n1 00:16:33.581 13:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.581 Running I/O for 1 seconds... 00:16:34.515 00:16:34.515 Latency(us) 00:16:34.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.516 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:34.516 Verification LBA range: start 0x0 length 0x2000 00:16:34.516 nvme0n1 : 1.02 3656.56 14.28 0.00 0.00 34531.92 2829.96 23473.80 00:16:34.516 =================================================================================================================== 00:16:34.516 Total : 3656.56 14.28 0.00 0.00 34531.92 2829.96 23473.80 00:16:34.516 0 00:16:34.516 13:00:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:34.516 13:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.516 13:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.774 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.774 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:34.774 "subsystems": [ 00:16:34.774 { 00:16:34.774 "subsystem": "keyring", 00:16:34.774 "config": [ 00:16:34.774 { 00:16:34.774 "method": "keyring_file_add_key", 00:16:34.774 "params": { 00:16:34.774 "name": "key0", 00:16:34.774 "path": "/tmp/tmp.7EX5gcjMNv" 00:16:34.774 } 00:16:34.774 } 00:16:34.774 ] 00:16:34.774 }, 00:16:34.774 { 00:16:34.774 "subsystem": "iobuf", 00:16:34.774 "config": [ 00:16:34.774 { 00:16:34.774 "method": "iobuf_set_options", 00:16:34.774 "params": { 00:16:34.774 "large_bufsize": 135168, 00:16:34.774 "large_pool_count": 1024, 00:16:34.774 "small_bufsize": 8192, 00:16:34.774 "small_pool_count": 8192 00:16:34.774 } 00:16:34.774 } 00:16:34.774 ] 00:16:34.774 }, 00:16:34.774 { 00:16:34.774 "subsystem": "sock", 00:16:34.774 "config": [ 00:16:34.774 { 00:16:34.774 "method": "sock_set_default_impl", 00:16:34.774 "params": { 00:16:34.774 "impl_name": "posix" 00:16:34.774 } 00:16:34.774 }, 00:16:34.774 { 00:16:34.774 "method": "sock_impl_set_options", 00:16:34.774 "params": { 00:16:34.774 
"enable_ktls": false, 00:16:34.774 "enable_placement_id": 0, 00:16:34.774 "enable_quickack": false, 00:16:34.774 "enable_recv_pipe": true, 00:16:34.774 "enable_zerocopy_send_client": false, 00:16:34.774 "enable_zerocopy_send_server": true, 00:16:34.774 "impl_name": "ssl", 00:16:34.774 "recv_buf_size": 4096, 00:16:34.774 "send_buf_size": 4096, 00:16:34.774 "tls_version": 0, 00:16:34.774 "zerocopy_threshold": 0 00:16:34.774 } 00:16:34.774 }, 00:16:34.774 { 00:16:34.774 "method": "sock_impl_set_options", 00:16:34.774 "params": { 00:16:34.774 "enable_ktls": false, 00:16:34.774 "enable_placement_id": 0, 00:16:34.774 "enable_quickack": false, 00:16:34.774 "enable_recv_pipe": true, 00:16:34.774 "enable_zerocopy_send_client": false, 00:16:34.774 "enable_zerocopy_send_server": true, 00:16:34.775 "impl_name": "posix", 00:16:34.775 "recv_buf_size": 2097152, 00:16:34.775 "send_buf_size": 2097152, 00:16:34.775 "tls_version": 0, 00:16:34.775 "zerocopy_threshold": 0 00:16:34.775 } 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "vmd", 00:16:34.775 "config": [] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "accel", 00:16:34.775 "config": [ 00:16:34.775 { 00:16:34.775 "method": "accel_set_options", 00:16:34.775 "params": { 00:16:34.775 "buf_count": 2048, 00:16:34.775 "large_cache_size": 16, 00:16:34.775 "sequence_count": 2048, 00:16:34.775 "small_cache_size": 128, 00:16:34.775 "task_count": 2048 00:16:34.775 } 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "bdev", 00:16:34.775 "config": [ 00:16:34.775 { 00:16:34.775 "method": "bdev_set_options", 00:16:34.775 "params": { 00:16:34.775 "bdev_auto_examine": true, 00:16:34.775 "bdev_io_cache_size": 256, 00:16:34.775 "bdev_io_pool_size": 65535, 00:16:34.775 "iobuf_large_cache_size": 16, 00:16:34.775 "iobuf_small_cache_size": 128 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_raid_set_options", 00:16:34.775 "params": { 00:16:34.775 "process_window_size_kb": 1024 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_iscsi_set_options", 00:16:34.775 "params": { 00:16:34.775 "timeout_sec": 30 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_nvme_set_options", 00:16:34.775 "params": { 00:16:34.775 "action_on_timeout": "none", 00:16:34.775 "allow_accel_sequence": false, 00:16:34.775 "arbitration_burst": 0, 00:16:34.775 "bdev_retry_count": 3, 00:16:34.775 "ctrlr_loss_timeout_sec": 0, 00:16:34.775 "delay_cmd_submit": true, 00:16:34.775 "dhchap_dhgroups": [ 00:16:34.775 "null", 00:16:34.775 "ffdhe2048", 00:16:34.775 "ffdhe3072", 00:16:34.775 "ffdhe4096", 00:16:34.775 "ffdhe6144", 00:16:34.775 "ffdhe8192" 00:16:34.775 ], 00:16:34.775 "dhchap_digests": [ 00:16:34.775 "sha256", 00:16:34.775 "sha384", 00:16:34.775 "sha512" 00:16:34.775 ], 00:16:34.775 "disable_auto_failback": false, 00:16:34.775 "fast_io_fail_timeout_sec": 0, 00:16:34.775 "generate_uuids": false, 00:16:34.775 "high_priority_weight": 0, 00:16:34.775 "io_path_stat": false, 00:16:34.775 "io_queue_requests": 0, 00:16:34.775 "keep_alive_timeout_ms": 10000, 00:16:34.775 "low_priority_weight": 0, 00:16:34.775 "medium_priority_weight": 0, 00:16:34.775 "nvme_adminq_poll_period_us": 10000, 00:16:34.775 "nvme_error_stat": false, 00:16:34.775 "nvme_ioq_poll_period_us": 0, 00:16:34.775 "rdma_cm_event_timeout_ms": 0, 00:16:34.775 "rdma_max_cq_size": 0, 00:16:34.775 "rdma_srq_size": 0, 00:16:34.775 "reconnect_delay_sec": 0, 00:16:34.775 "timeout_admin_us": 0, 
00:16:34.775 "timeout_us": 0, 00:16:34.775 "transport_ack_timeout": 0, 00:16:34.775 "transport_retry_count": 4, 00:16:34.775 "transport_tos": 0 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_nvme_set_hotplug", 00:16:34.775 "params": { 00:16:34.775 "enable": false, 00:16:34.775 "period_us": 100000 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_malloc_create", 00:16:34.775 "params": { 00:16:34.775 "block_size": 4096, 00:16:34.775 "name": "malloc0", 00:16:34.775 "num_blocks": 8192, 00:16:34.775 "optimal_io_boundary": 0, 00:16:34.775 "physical_block_size": 4096, 00:16:34.775 "uuid": "c5b810f0-908d-421c-bae6-e98c0bdf8dbc" 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "bdev_wait_for_examine" 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "nbd", 00:16:34.775 "config": [] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "scheduler", 00:16:34.775 "config": [ 00:16:34.775 { 00:16:34.775 "method": "framework_set_scheduler", 00:16:34.775 "params": { 00:16:34.775 "name": "static" 00:16:34.775 } 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "subsystem": "nvmf", 00:16:34.775 "config": [ 00:16:34.775 { 00:16:34.775 "method": "nvmf_set_config", 00:16:34.775 "params": { 00:16:34.775 "admin_cmd_passthru": { 00:16:34.775 "identify_ctrlr": false 00:16:34.775 }, 00:16:34.775 "discovery_filter": "match_any" 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_set_max_subsystems", 00:16:34.775 "params": { 00:16:34.775 "max_subsystems": 1024 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_set_crdt", 00:16:34.775 "params": { 00:16:34.775 "crdt1": 0, 00:16:34.775 "crdt2": 0, 00:16:34.775 "crdt3": 0 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_create_transport", 00:16:34.775 "params": { 00:16:34.775 "abort_timeout_sec": 1, 00:16:34.775 "ack_timeout": 0, 00:16:34.775 "buf_cache_size": 4294967295, 00:16:34.775 "c2h_success": false, 00:16:34.775 "data_wr_pool_size": 0, 00:16:34.775 "dif_insert_or_strip": false, 00:16:34.775 "in_capsule_data_size": 4096, 00:16:34.775 "io_unit_size": 131072, 00:16:34.775 "max_aq_depth": 128, 00:16:34.775 "max_io_qpairs_per_ctrlr": 127, 00:16:34.775 "max_io_size": 131072, 00:16:34.775 "max_queue_depth": 128, 00:16:34.775 "num_shared_buffers": 511, 00:16:34.775 "sock_priority": 0, 00:16:34.775 "trtype": "TCP", 00:16:34.775 "zcopy": false 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_create_subsystem", 00:16:34.775 "params": { 00:16:34.775 "allow_any_host": false, 00:16:34.775 "ana_reporting": false, 00:16:34.775 "max_cntlid": 65519, 00:16:34.775 "max_namespaces": 32, 00:16:34.775 "min_cntlid": 1, 00:16:34.775 "model_number": "SPDK bdev Controller", 00:16:34.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.775 "serial_number": "00000000000000000000" 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_subsystem_add_host", 00:16:34.775 "params": { 00:16:34.775 "host": "nqn.2016-06.io.spdk:host1", 00:16:34.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.775 "psk": "key0" 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_subsystem_add_ns", 00:16:34.775 "params": { 00:16:34.775 "namespace": { 00:16:34.775 "bdev_name": "malloc0", 00:16:34.775 "nguid": "C5B810F0908D421CBAE6E98C0BDF8DBC", 00:16:34.775 "no_auto_visible": false, 00:16:34.775 "nsid": 1, 00:16:34.775 "uuid": 
"c5b810f0-908d-421c-bae6-e98c0bdf8dbc" 00:16:34.775 }, 00:16:34.775 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:34.775 } 00:16:34.775 }, 00:16:34.775 { 00:16:34.775 "method": "nvmf_subsystem_add_listener", 00:16:34.775 "params": { 00:16:34.775 "listen_address": { 00:16:34.775 "adrfam": "IPv4", 00:16:34.775 "traddr": "10.0.0.2", 00:16:34.775 "trsvcid": "4420", 00:16:34.775 "trtype": "TCP" 00:16:34.775 }, 00:16:34.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.775 "secure_channel": true 00:16:34.775 } 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 } 00:16:34.775 ] 00:16:34.775 }' 00:16:34.775 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:35.034 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:35.034 "subsystems": [ 00:16:35.034 { 00:16:35.034 "subsystem": "keyring", 00:16:35.034 "config": [ 00:16:35.034 { 00:16:35.034 "method": "keyring_file_add_key", 00:16:35.034 "params": { 00:16:35.034 "name": "key0", 00:16:35.034 "path": "/tmp/tmp.7EX5gcjMNv" 00:16:35.034 } 00:16:35.034 } 00:16:35.034 ] 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "subsystem": "iobuf", 00:16:35.034 "config": [ 00:16:35.034 { 00:16:35.034 "method": "iobuf_set_options", 00:16:35.034 "params": { 00:16:35.034 "large_bufsize": 135168, 00:16:35.034 "large_pool_count": 1024, 00:16:35.034 "small_bufsize": 8192, 00:16:35.034 "small_pool_count": 8192 00:16:35.034 } 00:16:35.034 } 00:16:35.034 ] 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "subsystem": "sock", 00:16:35.034 "config": [ 00:16:35.034 { 00:16:35.034 "method": "sock_set_default_impl", 00:16:35.034 "params": { 00:16:35.034 "impl_name": "posix" 00:16:35.034 } 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "method": "sock_impl_set_options", 00:16:35.034 "params": { 00:16:35.034 "enable_ktls": false, 00:16:35.034 "enable_placement_id": 0, 00:16:35.034 "enable_quickack": false, 00:16:35.034 "enable_recv_pipe": true, 00:16:35.034 "enable_zerocopy_send_client": false, 00:16:35.034 "enable_zerocopy_send_server": true, 00:16:35.034 "impl_name": "ssl", 00:16:35.034 "recv_buf_size": 4096, 00:16:35.034 "send_buf_size": 4096, 00:16:35.034 "tls_version": 0, 00:16:35.034 "zerocopy_threshold": 0 00:16:35.034 } 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "method": "sock_impl_set_options", 00:16:35.034 "params": { 00:16:35.034 "enable_ktls": false, 00:16:35.034 "enable_placement_id": 0, 00:16:35.034 "enable_quickack": false, 00:16:35.034 "enable_recv_pipe": true, 00:16:35.034 "enable_zerocopy_send_client": false, 00:16:35.034 "enable_zerocopy_send_server": true, 00:16:35.034 "impl_name": "posix", 00:16:35.034 "recv_buf_size": 2097152, 00:16:35.034 "send_buf_size": 2097152, 00:16:35.034 "tls_version": 0, 00:16:35.034 "zerocopy_threshold": 0 00:16:35.034 } 00:16:35.034 } 00:16:35.034 ] 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "subsystem": "vmd", 00:16:35.034 "config": [] 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "subsystem": "accel", 00:16:35.034 "config": [ 00:16:35.034 { 00:16:35.034 "method": "accel_set_options", 00:16:35.034 "params": { 00:16:35.034 "buf_count": 2048, 00:16:35.034 "large_cache_size": 16, 00:16:35.034 "sequence_count": 2048, 00:16:35.034 "small_cache_size": 128, 00:16:35.034 "task_count": 2048 00:16:35.034 } 00:16:35.034 } 00:16:35.034 ] 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "subsystem": "bdev", 00:16:35.034 "config": [ 00:16:35.034 { 00:16:35.034 "method": "bdev_set_options", 00:16:35.034 "params": { 00:16:35.034 "bdev_auto_examine": true, 
00:16:35.034 "bdev_io_cache_size": 256, 00:16:35.034 "bdev_io_pool_size": 65535, 00:16:35.034 "iobuf_large_cache_size": 16, 00:16:35.034 "iobuf_small_cache_size": 128 00:16:35.034 } 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "method": "bdev_raid_set_options", 00:16:35.034 "params": { 00:16:35.034 "process_window_size_kb": 1024 00:16:35.034 } 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "method": "bdev_iscsi_set_options", 00:16:35.034 "params": { 00:16:35.034 "timeout_sec": 30 00:16:35.034 } 00:16:35.034 }, 00:16:35.034 { 00:16:35.034 "method": "bdev_nvme_set_options", 00:16:35.034 "params": { 00:16:35.034 "action_on_timeout": "none", 00:16:35.034 "allow_accel_sequence": false, 00:16:35.034 "arbitration_burst": 0, 00:16:35.034 "bdev_retry_count": 3, 00:16:35.034 "ctrlr_loss_timeout_sec": 0, 00:16:35.034 "delay_cmd_submit": true, 00:16:35.034 "dhchap_dhgroups": [ 00:16:35.034 "null", 00:16:35.034 "ffdhe2048", 00:16:35.034 "ffdhe3072", 00:16:35.034 "ffdhe4096", 00:16:35.034 "ffdhe6144", 00:16:35.034 "ffdhe8192" 00:16:35.034 ], 00:16:35.034 "dhchap_digests": [ 00:16:35.034 "sha256", 00:16:35.034 "sha384", 00:16:35.034 "sha512" 00:16:35.034 ], 00:16:35.034 "disable_auto_failback": false, 00:16:35.034 "fast_io_fail_timeout_sec": 0, 00:16:35.034 "generate_uuids": false, 00:16:35.034 "high_priority_weight": 0, 00:16:35.034 "io_path_stat": false, 00:16:35.034 "io_queue_requests": 512, 00:16:35.034 "keep_alive_timeout_ms": 10000, 00:16:35.034 "low_priority_weight": 0, 00:16:35.034 "medium_priority_weight": 0, 00:16:35.034 "nvme_adminq_poll_period_us": 10000, 00:16:35.034 "nvme_error_stat": false, 00:16:35.034 "nvme_ioq_poll_period_us": 0, 00:16:35.034 "rdma_cm_event_timeout_ms": 0, 00:16:35.034 "rdma_max_cq_size": 0, 00:16:35.034 "rdma_srq_size": 0, 00:16:35.034 "reconnect_delay_sec": 0, 00:16:35.034 "timeout_admin_us": 0, 00:16:35.034 "timeout_us": 0, 00:16:35.035 "transport_ack_timeout": 0, 00:16:35.035 "transport_retry_count": 4, 00:16:35.035 "transport_tos": 0 00:16:35.035 } 00:16:35.035 }, 00:16:35.035 { 00:16:35.035 "method": "bdev_nvme_attach_controller", 00:16:35.035 "params": { 00:16:35.035 "adrfam": "IPv4", 00:16:35.035 "ctrlr_loss_timeout_sec": 0, 00:16:35.035 "ddgst": false, 00:16:35.035 "fast_io_fail_timeout_sec": 0, 00:16:35.035 "hdgst": false, 00:16:35.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.035 "name": "nvme0", 00:16:35.035 "prchk_guard": false, 00:16:35.035 "prchk_reftag": false, 00:16:35.035 "psk": "key0", 00:16:35.035 "reconnect_delay_sec": 0, 00:16:35.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.035 "traddr": "10.0.0.2", 00:16:35.035 "trsvcid": "4420", 00:16:35.035 "trtype": "TCP" 00:16:35.035 } 00:16:35.035 }, 00:16:35.035 { 00:16:35.035 "method": "bdev_nvme_set_hotplug", 00:16:35.035 "params": { 00:16:35.035 "enable": false, 00:16:35.035 "period_us": 100000 00:16:35.035 } 00:16:35.035 }, 00:16:35.035 { 00:16:35.035 "method": "bdev_enable_histogram", 00:16:35.035 "params": { 00:16:35.035 "enable": true, 00:16:35.035 "name": "nvme0n1" 00:16:35.035 } 00:16:35.035 }, 00:16:35.035 { 00:16:35.035 "method": "bdev_wait_for_examine" 00:16:35.035 } 00:16:35.035 ] 00:16:35.035 }, 00:16:35.035 { 00:16:35.035 "subsystem": "nbd", 00:16:35.035 "config": [] 00:16:35.035 } 00:16:35.035 ] 00:16:35.035 }' 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85173 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85173 ']' 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85173 
00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85173 00:16:35.035 killing process with pid 85173 00:16:35.035 Received shutdown signal, test time was about 1.000000 seconds 00:16:35.035 00:16:35.035 Latency(us) 00:16:35.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.035 =================================================================================================================== 00:16:35.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85173' 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85173 00:16:35.035 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85173 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85124 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85124 ']' 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85124 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85124 00:16:35.294 killing process with pid 85124 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85124' 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85124 00:16:35.294 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85124 00:16:35.552 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:35.552 13:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:35.552 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.552 13:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:35.552 "subsystems": [ 00:16:35.552 { 00:16:35.552 "subsystem": "keyring", 00:16:35.552 "config": [ 00:16:35.552 { 00:16:35.552 "method": "keyring_file_add_key", 00:16:35.552 "params": { 00:16:35.552 "name": "key0", 00:16:35.552 "path": "/tmp/tmp.7EX5gcjMNv" 00:16:35.552 } 00:16:35.552 } 00:16:35.552 ] 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "subsystem": "iobuf", 00:16:35.552 "config": [ 00:16:35.552 { 00:16:35.552 "method": "iobuf_set_options", 00:16:35.552 "params": { 00:16:35.552 "large_bufsize": 135168, 00:16:35.552 "large_pool_count": 1024, 00:16:35.552 "small_bufsize": 8192, 00:16:35.552 "small_pool_count": 8192 00:16:35.552 } 00:16:35.552 } 00:16:35.552 ] 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "subsystem": "sock", 00:16:35.552 "config": [ 00:16:35.552 { 00:16:35.552 "method": "sock_set_default_impl", 00:16:35.552 "params": { 00:16:35.552 
"impl_name": "posix" 00:16:35.552 } 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "method": "sock_impl_set_options", 00:16:35.552 "params": { 00:16:35.552 "enable_ktls": false, 00:16:35.552 "enable_placement_id": 0, 00:16:35.552 "enable_quickack": false, 00:16:35.552 "enable_recv_pipe": true, 00:16:35.552 "enable_zerocopy_send_client": false, 00:16:35.552 "enable_zerocopy_send_server": true, 00:16:35.552 "impl_name": "ssl", 00:16:35.552 "recv_buf_size": 4096, 00:16:35.552 "send_buf_size": 4096, 00:16:35.552 "tls_version": 0, 00:16:35.552 "zerocopy_threshold": 0 00:16:35.552 } 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "method": "sock_impl_set_options", 00:16:35.552 "params": { 00:16:35.552 "enable_ktls": false, 00:16:35.552 "enable_placement_id": 0, 00:16:35.552 "enable_quickack": false, 00:16:35.552 "enable_recv_pipe": true, 00:16:35.552 "enable_zerocopy_send_client": false, 00:16:35.552 "enable_zerocopy_send_server": true, 00:16:35.552 "impl_name": "posix", 00:16:35.552 "recv_buf_size": 2097152, 00:16:35.552 "send_buf_size": 2097152, 00:16:35.552 "tls_version": 0, 00:16:35.552 "zerocopy_threshold": 0 00:16:35.552 } 00:16:35.552 } 00:16:35.552 ] 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "subsystem": "vmd", 00:16:35.552 "config": [] 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "subsystem": "accel", 00:16:35.552 "config": [ 00:16:35.552 { 00:16:35.552 "method": "accel_set_options", 00:16:35.552 "params": { 00:16:35.552 "buf_count": 2048, 00:16:35.552 "large_cache_size": 16, 00:16:35.552 "sequence_count": 2048, 00:16:35.552 "small_cache_size": 128, 00:16:35.552 "task_count": 2048 00:16:35.552 } 00:16:35.552 } 00:16:35.552 ] 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "subsystem": "bdev", 00:16:35.552 "config": [ 00:16:35.552 { 00:16:35.552 "method": "bdev_set_options", 00:16:35.552 "params": { 00:16:35.552 "bdev_auto_examine": true, 00:16:35.552 "bdev_io_cache_size": 256, 00:16:35.552 "bdev_io_pool_size": 65535, 00:16:35.552 "iobuf_large_cache_size": 16, 00:16:35.552 "iobuf_small_cache_size": 128 00:16:35.552 } 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "method": "bdev_raid_set_options", 00:16:35.552 "params": { 00:16:35.552 "process_window_size_kb": 1024 00:16:35.552 } 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "method": "bdev_iscsi_set_options", 00:16:35.552 "params": { 00:16:35.552 "timeout_sec": 30 00:16:35.552 } 00:16:35.552 }, 00:16:35.552 { 00:16:35.552 "method": "bdev_nvme_set_options", 00:16:35.552 "params": { 00:16:35.552 "action_on_timeout": "none", 00:16:35.552 "allow_accel_sequence": false, 00:16:35.552 "arbitration_burst": 0, 00:16:35.552 "bdev_retry_count": 3, 00:16:35.552 "ctrlr_loss_timeout_sec": 0, 00:16:35.552 "delay_cmd_submit": true, 00:16:35.552 "dhchap_dhgroups": [ 00:16:35.552 "null", 00:16:35.552 "ffdhe2048", 00:16:35.552 "ffdhe3072", 00:16:35.552 "ffdhe4096", 00:16:35.552 "ffdhe6144", 00:16:35.552 "ffdhe8192" 00:16:35.552 ], 00:16:35.552 "dhchap_digests": [ 00:16:35.552 "sha256", 00:16:35.552 "sha384", 00:16:35.552 "sha512" 00:16:35.552 ], 00:16:35.552 "disable_auto_failback": false, 00:16:35.553 "fast_io_fail_timeout_sec": 0, 00:16:35.553 "generate_uuids": false, 00:16:35.553 "high_priority_weight": 0, 00:16:35.553 "io_path_stat": false, 00:16:35.553 "io_queue_requests": 0, 00:16:35.553 "keep_alive_timeout_ms": 10000, 00:16:35.553 "low_priority_weight": 0, 00:16:35.553 "medium_priority_weight": 0, 00:16:35.553 "nvme_adminq_poll_period_us": 10000, 00:16:35.553 "nvme_error_stat": false, 00:16:35.553 "nvme_ioq_poll_period_us": 0, 00:16:35.553 
"rdma_cm_event_timeout_ms": 0, 00:16:35.553 "rdma_max_cq_size": 0, 00:16:35.553 "rdma_srq_size": 0, 00:16:35.553 "reconnect_delay_sec": 0, 00:16:35.553 "timeout_admin_us": 0, 00:16:35.553 "timeout_us": 0, 00:16:35.553 "transport_ack_timeout": 0, 00:16:35.553 "transport_retry_count": 4, 00:16:35.553 "transport_tos": 0 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "bdev_nvme_set_hotplug", 00:16:35.553 "params": { 00:16:35.553 "enable": false, 00:16:35.553 "period_us": 100000 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "bdev_malloc_create", 00:16:35.553 "params": { 00:16:35.553 "block_size": 4096, 00:16:35.553 "name": "malloc0", 00:16:35.553 "num_blocks": 8192, 00:16:35.553 "optimal_io_boundary": 0, 00:16:35.553 "physical_block_size": 4096, 00:16:35.553 "uuid": "c5b810f0-908d-421c-bae6-e98c0bdf8dbc" 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "bdev_wait_for_examine" 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "subsystem": "nbd", 00:16:35.553 "config": [] 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "subsystem": "scheduler", 00:16:35.553 "config": [ 00:16:35.553 { 00:16:35.553 "method": "framework_set_scheduler", 00:16:35.553 "params": { 00:16:35.553 "name": "static" 00:16:35.553 } 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "subsystem": "nvmf", 00:16:35.553 "config": [ 00:16:35.553 { 00:16:35.553 "method": "nvmf_set_config", 00:16:35.553 "params": { 00:16:35.553 "admin_cmd_passthru": { 00:16:35.553 "identify_ctrlr": false 00:16:35.553 }, 00:16:35.553 "discovery_filter": "match_any" 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_set_max_subsystems", 00:16:35.553 "params": { 00:16:35.553 "max_subsystems": 1024 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_set_crdt", 00:16:35.553 "params": { 00:16:35.553 "crdt1": 0, 00:16:35.553 "crdt2": 0, 00:16:35.553 "crdt3": 0 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_create_transport", 00:16:35.553 "params": { 00:16:35.553 "abort_timeout_sec": 1, 00:16:35.553 "ack_timeout": 0, 00:16:35.553 "buf_cache_size": 4294967295, 00:16:35.553 "c2h_success": false, 00:16:35.553 "data_wr_pool_size": 0, 00:16:35.553 "dif_insert_or_strip": false, 00:16:35.553 "in_capsule_data_size": 4096, 00:16:35.553 "io_unit_size": 131072, 00:16:35.553 "max_aq_depth": 128, 00:16:35.553 "max_io_qpairs_per_ctrlr": 127, 00:16:35.553 "max_io_size": 131072, 00:16:35.553 "max_queue_depth": 128, 00:16:35.553 "num_shared_buffers": 511, 00:16:35.553 "sock_priority": 0, 00:16:35.553 "trtype": "TCP", 00:16:35.553 "zcopy": false 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_create_subsystem", 00:16:35.553 "params": { 00:16:35.553 "allow_any_host": false, 00:16:35.553 "ana_reporting": false, 00:16:35.553 "max_cntlid": 65519, 00:16:35.553 "max_namespaces": 32, 00:16:35.553 "min_cntlid": 1, 00:16:35.553 "model_number": "SPDK bdev Controller", 00:16:35.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.553 "serial_number": "00000000000000000000" 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_subsystem_add_host", 00:16:35.553 "params": { 00:16:35.553 "host": "nqn.2016-06.io.spdk:host1", 00:16:35.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.553 "psk": "key0" 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_subsystem_add_ns", 00:16:35.553 "params": { 00:16:35.553 "namespace": { 00:16:35.553 
"bdev_name": "malloc0", 00:16:35.553 "nguid": "C5B810F0908D421CBAE6E98C0BDF8DBC", 00:16:35.553 "no_auto_visible": false, 00:16:35.553 "nsid": 1, 00:16:35.553 "uuid": "c5b810f0-908d-421c-bae6-e98c0bdf8dbc" 00:16:35.553 }, 00:16:35.553 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:35.553 } 00:16:35.553 }, 00:16:35.553 { 00:16:35.553 "method": "nvmf_subsystem_add_listener", 00:16:35.553 "params": { 00:16:35.553 "listen_address": { 00:16:35.553 "adrfam": "IPv4", 00:16:35.553 "traddr": "10.0.0.2", 00:16:35.553 "trsvcid": "4420", 00:16:35.553 "trtype": "TCP" 00:16:35.553 }, 00:16:35.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.553 "secure_channel": true 00:16:35.553 } 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 } 00:16:35.553 ] 00:16:35.553 }' 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=85265 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 85265 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85265 ']' 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.553 13:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.553 [2024-07-15 13:00:47.902894] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:35.553 [2024-07-15 13:00:47.903045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.810 [2024-07-15 13:00:48.052643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.810 [2024-07-15 13:00:48.128344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.810 [2024-07-15 13:00:48.128403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.810 [2024-07-15 13:00:48.128418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.810 [2024-07-15 13:00:48.128428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.810 [2024-07-15 13:00:48.128436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:35.810 [2024-07-15 13:00:48.128532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.068 [2024-07-15 13:00:48.325191] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.068 [2024-07-15 13:00:48.357103] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.068 [2024-07-15 13:00:48.357327] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85309 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85309 /var/tmp/bdevperf.sock 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85309 ']' 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
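This final test case replays saved configuration instead of re-issuing individual RPCs: the running target's state was captured with save_config (the tgtcfg JSON above), bdevperf's state was captured the same way over /var/tmp/bdevperf.sock (bperfcfg), and both applications are restarted with -c pointing at that JSON, as the bdevperf launch just below shows. Judging from the echoed JSON paired with -c /dev/fd/62 and -c /dev/fd/63 in the trace, the script most plausibly hands the configs over with bash process substitution, roughly:

  # capture the live configuration as JSON (target/tls.sh@263-264; the trace uses the
  # test's rpc_cmd wrapper, plain rpc.py is shown here)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgtcfg=$($rpc save_config)                            # target side, default /var/tmp/spdk.sock
  bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)

  # restart both applications from the captured JSON; <(echo ...) is what appears
  # in the trace as -c /dev/fd/62 and -c /dev/fd/63
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &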
00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.632 13:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:36.632 "subsystems": [ 00:16:36.632 { 00:16:36.632 "subsystem": "keyring", 00:16:36.632 "config": [ 00:16:36.632 { 00:16:36.632 "method": "keyring_file_add_key", 00:16:36.632 "params": { 00:16:36.632 "name": "key0", 00:16:36.632 "path": "/tmp/tmp.7EX5gcjMNv" 00:16:36.632 } 00:16:36.632 } 00:16:36.632 ] 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "subsystem": "iobuf", 00:16:36.632 "config": [ 00:16:36.632 { 00:16:36.632 "method": "iobuf_set_options", 00:16:36.632 "params": { 00:16:36.632 "large_bufsize": 135168, 00:16:36.632 "large_pool_count": 1024, 00:16:36.632 "small_bufsize": 8192, 00:16:36.632 "small_pool_count": 8192 00:16:36.632 } 00:16:36.632 } 00:16:36.632 ] 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "subsystem": "sock", 00:16:36.632 "config": [ 00:16:36.632 { 00:16:36.632 "method": "sock_set_default_impl", 00:16:36.632 "params": { 00:16:36.632 "impl_name": "posix" 00:16:36.632 } 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "method": "sock_impl_set_options", 00:16:36.632 "params": { 00:16:36.632 "enable_ktls": false, 00:16:36.632 "enable_placement_id": 0, 00:16:36.632 "enable_quickack": false, 00:16:36.632 "enable_recv_pipe": true, 00:16:36.632 "enable_zerocopy_send_client": false, 00:16:36.632 "enable_zerocopy_send_server": true, 00:16:36.632 "impl_name": "ssl", 00:16:36.632 "recv_buf_size": 4096, 00:16:36.632 "send_buf_size": 4096, 00:16:36.632 "tls_version": 0, 00:16:36.632 "zerocopy_threshold": 0 00:16:36.632 } 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "method": "sock_impl_set_options", 00:16:36.632 "params": { 00:16:36.632 "enable_ktls": false, 00:16:36.632 "enable_placement_id": 0, 00:16:36.632 "enable_quickack": false, 00:16:36.632 "enable_recv_pipe": true, 00:16:36.632 "enable_zerocopy_send_client": false, 00:16:36.632 "enable_zerocopy_send_server": true, 00:16:36.632 "impl_name": "posix", 00:16:36.632 "recv_buf_size": 2097152, 00:16:36.632 "send_buf_size": 2097152, 00:16:36.632 "tls_version": 0, 00:16:36.632 "zerocopy_threshold": 0 00:16:36.632 } 00:16:36.632 } 00:16:36.632 ] 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "subsystem": "vmd", 00:16:36.632 "config": [] 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "subsystem": "accel", 00:16:36.632 "config": [ 00:16:36.632 { 00:16:36.632 "method": "accel_set_options", 00:16:36.632 "params": { 00:16:36.632 "buf_count": 2048, 00:16:36.632 "large_cache_size": 16, 00:16:36.632 "sequence_count": 2048, 00:16:36.632 "small_cache_size": 128, 00:16:36.632 "task_count": 2048 00:16:36.632 } 00:16:36.632 } 00:16:36.632 ] 00:16:36.632 }, 00:16:36.632 { 00:16:36.632 "subsystem": "bdev", 00:16:36.632 "config": [ 00:16:36.632 { 00:16:36.633 "method": "bdev_set_options", 00:16:36.633 "params": { 00:16:36.633 "bdev_auto_examine": true, 00:16:36.633 "bdev_io_cache_size": 256, 00:16:36.633 "bdev_io_pool_size": 65535, 00:16:36.633 "iobuf_large_cache_size": 16, 00:16:36.633 "iobuf_small_cache_size": 128 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_raid_set_options", 00:16:36.633 "params": { 00:16:36.633 "process_window_size_kb": 1024 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_iscsi_set_options", 00:16:36.633 "params": { 
00:16:36.633 "timeout_sec": 30 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_nvme_set_options", 00:16:36.633 "params": { 00:16:36.633 "action_on_timeout": "none", 00:16:36.633 "allow_accel_sequence": false, 00:16:36.633 "arbitration_burst": 0, 00:16:36.633 "bdev_retry_count": 3, 00:16:36.633 "ctrlr_loss_timeout_sec": 0, 00:16:36.633 "delay_cmd_submit": true, 00:16:36.633 "dhchap_dhgroups": [ 00:16:36.633 "null", 00:16:36.633 "ffdhe2048", 00:16:36.633 "ffdhe3072", 00:16:36.633 "ffdhe4096", 00:16:36.633 "ffdhe6144", 00:16:36.633 "ffdhe8192" 00:16:36.633 ], 00:16:36.633 "dhchap_digests": [ 00:16:36.633 "sha256", 00:16:36.633 "sha384", 00:16:36.633 "sha512" 00:16:36.633 ], 00:16:36.633 "disable_auto_failback": false, 00:16:36.633 "fast_io_fail_timeout_sec": 0, 00:16:36.633 "generate_uuids": false, 00:16:36.633 "high_priority_weight": 0, 00:16:36.633 "io_path_stat": false, 00:16:36.633 "io_queue_requests": 512, 00:16:36.633 "keep_alive_timeout_ms": 10000, 00:16:36.633 "low_priority_weight": 0, 00:16:36.633 "medium_priority_weight": 0, 00:16:36.633 "nvme_adminq_poll_period_us": 10000, 00:16:36.633 "nvme_error_stat": false, 00:16:36.633 "nvme_ioq_poll_period_us": 0, 00:16:36.633 "rdma_cm_event_timeout_ms": 0, 00:16:36.633 "rdma_max_cq_size": 0, 00:16:36.633 "rdma_srq 13:00:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.633 _size": 0, 00:16:36.633 "reconnect_delay_sec": 0, 00:16:36.633 "timeout_admin_us": 0, 00:16:36.633 "timeout_us": 0, 00:16:36.633 "transport_ack_timeout": 0, 00:16:36.633 "transport_retry_count": 4, 00:16:36.633 "transport_tos": 0 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_nvme_attach_controller", 00:16:36.633 "params": { 00:16:36.633 "adrfam": "IPv4", 00:16:36.633 "ctrlr_loss_timeout_sec": 0, 00:16:36.633 "ddgst": false, 00:16:36.633 "fast_io_fail_timeout_sec": 0, 00:16:36.633 "hdgst": false, 00:16:36.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.633 "name": "nvme0", 00:16:36.633 "prchk_guard": false, 00:16:36.633 "prchk_reftag": false, 00:16:36.633 "psk": "key0", 00:16:36.633 "reconnect_delay_sec": 0, 00:16:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.633 "traddr": "10.0.0.2", 00:16:36.633 "trsvcid": "4420", 00:16:36.633 "trtype": "TCP" 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_nvme_set_hotplug", 00:16:36.633 "params": { 00:16:36.633 "enable": false, 00:16:36.633 "period_us": 100000 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_enable_histogram", 00:16:36.633 "params": { 00:16:36.633 "enable": true, 00:16:36.633 "name": "nvme0n1" 00:16:36.633 } 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "method": "bdev_wait_for_examine" 00:16:36.633 } 00:16:36.633 ] 00:16:36.633 }, 00:16:36.633 { 00:16:36.633 "subsystem": "nbd", 00:16:36.633 "config": [] 00:16:36.633 } 00:16:36.633 ] 00:16:36.633 }' 00:16:36.633 [2024-07-15 13:00:49.026375] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:16:36.633 [2024-07-15 13:00:49.027174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85309 ] 00:16:36.890 [2024-07-15 13:00:49.167178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.890 [2024-07-15 13:00:49.250089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.148 [2024-07-15 13:00:49.384246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.713 13:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.713 13:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:37.713 13:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:37.713 13:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:37.971 13:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.971 13:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:38.229 Running I/O for 1 seconds... 00:16:39.161 00:16:39.161 Latency(us) 00:16:39.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.161 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:39.161 Verification LBA range: start 0x0 length 0x2000 00:16:39.161 nvme0n1 : 1.03 3733.06 14.58 0.00 0.00 33890.74 7745.16 22163.08 00:16:39.161 =================================================================================================================== 00:16:39.161 Total : 3733.06 14.58 0.00 0.00 33890.74 7745.16 22163.08 00:16:39.161 0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:39.161 nvmf_trace.0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85309 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85309 ']' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85309 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:39.161 
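Before driving I/O from the replayed configuration, the test checks that the controller defined in bperfcfg actually came up by listing controllers over bdevperf's RPC socket and comparing the reported name against the expected nvme0; only then is perform_tests issued. Condensed from the trace above, with the bash pattern match simplified:

  # verify the controller created from the saved config, then run the workload (target/tls.sh@275-276)
  name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
             bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                  # trace shows [[ nvme0 == \n\v\m\e\0 ]]
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The cleanup traced next tars /dev/shm/nvmf_trace.0 into the output directory (process_shm) and then tears down the bdevperf and nvmf_tgt processes.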
13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85309 00:16:39.161 killing process with pid 85309 00:16:39.161 Received shutdown signal, test time was about 1.000000 seconds 00:16:39.161 00:16:39.161 Latency(us) 00:16:39.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.161 =================================================================================================================== 00:16:39.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85309' 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85309 00:16:39.161 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85309 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # nvmfcleanup 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:39.422 rmmod nvme_tcp 00:16:39.422 rmmod nvme_fabrics 00:16:39.422 rmmod nvme_keyring 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@493 -- # '[' -n 85265 ']' 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@494 -- # killprocess 85265 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85265 ']' 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85265 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.422 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85265 00:16:39.681 killing process with pid 85265 00:16:39.681 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.681 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.681 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85265' 00:16:39.681 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85265 00:16:39.681 13:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85265 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@282 -- # remove_spdk_ns 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OB5piMLiqe /tmp/tmp.6oCvEbFdPn /tmp/tmp.7EX5gcjMNv 00:16:39.681 00:16:39.681 real 1m23.711s 00:16:39.681 user 2m14.000s 00:16:39.681 sys 0m26.963s 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.681 13:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.681 ************************************ 00:16:39.681 END TEST nvmf_tls 00:16:39.681 ************************************ 00:16:39.939 13:00:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.939 13:00:52 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:39.939 13:00:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.939 13:00:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.939 13:00:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.939 ************************************ 00:16:39.939 START TEST nvmf_fips 00:16:39.939 ************************************ 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:39.939 * Looking for test storage... 
00:16:39.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:39.939 13:00:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@338 -- # ver2_l=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.940 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:40.199 Error setting digest 00:16:40.199 00C269F68B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:40.199 00C269F68B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@452 -- # prepare_net_devs 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # local -g is_hw=no 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # remove_spdk_ns 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@436 -- # nvmf_veth_init 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:16:40.199 Cannot find device "nvmf_tgt_br" 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.199 Cannot find device "nvmf_tgt_br2" 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:16:40.199 Cannot find device "nvmf_tgt_br" 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:16:40.199 Cannot find device "nvmf_tgt_br2" 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@167 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.199 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.458 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:16:40.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:40.459 00:16:40.459 --- 10.0.0.2 ping statistics --- 00:16:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.459 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:16:40.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:40.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:40.459 00:16:40.459 --- 10.0.0.3 ping statistics --- 00:16:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.459 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:40.459 00:16:40.459 --- 10.0.0.1 ping statistics --- 00:16:40.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.459 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@437 -- # return 0 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:40.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@485 -- # nvmfpid=85591 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@486 -- # waitforlisten 85591 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85591 ']' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.459 13:00:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:40.459 [2024-07-15 13:00:52.911322] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
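The "Cannot find device" and "Cannot open network namespace" messages above are only the best-effort teardown of leftovers from a previous run; nvmf_veth_init then rebuilds the same two-sided topology every time. Below is a condensed sketch of what the trace shows (link-up steps and the second target interface omitted); the target is afterwards launched under ip netns exec nvmf_tgt_ns_spdk so it listens on 10.0.0.2 while the initiator stays in the root namespace on 10.0.0.1:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator half
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target half
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator-to-target sanity check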
00:16:40.459 [2024-07-15 13:00:52.911633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.718 [2024-07-15 13:00:53.050830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.718 [2024-07-15 13:00:53.109709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.718 [2024-07-15 13:00:53.109951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.718 [2024-07-15 13:00:53.110434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.718 [2024-07-15 13:00:53.110667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.718 [2024-07-15 13:00:53.110868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.718 [2024-07-15 13:00:53.111103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:41.650 13:00:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.908 [2024-07-15 13:00:54.174836] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.908 [2024-07-15 13:00:54.190745] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:41.909 [2024-07-15 13:00:54.190951] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.909 [2024-07-15 13:00:54.217039] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:41.909 malloc0 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85643 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85643 /var/tmp/bdevperf.sock 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85643 ']' 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.909 13:00:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:41.909 [2024-07-15 13:00:54.354336] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:41.909 [2024-07-15 13:00:54.354728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85643 ] 00:16:42.167 [2024-07-15 13:00:54.493446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.167 [2024-07-15 13:00:54.567741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.108 13:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.108 13:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:43.108 13:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:43.108 [2024-07-15 13:00:55.502845] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:43.108 [2024-07-15 13:00:55.502964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:43.108 TLSTESTn1 00:16:43.366 13:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:43.366 Running I/O for 10 seconds... 
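Pulling the fips TLS setup together, the flow visible in the trace is: write the interchange-format PSK to key.txt, restrict its permissions, hand the same file to the target via setup_nvmf_tgt_conf (its individual rpc.py calls are not echoed here, but the notices show a TCP listener on 10.0.0.2:4420 and a subsystem host entry carrying the PSK path), then attach from the initiator side through bdevperf's RPC socket. A condensed sketch with the key value, paths and flags copied from the trace; the redirection into key.txt is implied rather than shown:

  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  KEY_PATH=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$KEY" > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$KEY_PATH"    # the warnings above note this psk-by-path form is deprecated in v24.09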
00:16:53.331 00:16:53.331 Latency(us) 00:16:53.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.331 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:53.331 Verification LBA range: start 0x0 length 0x2000 00:16:53.331 TLSTESTn1 : 10.03 3564.95 13.93 0.00 0.00 35824.18 7417.48 36700.16 00:16:53.331 =================================================================================================================== 00:16:53.331 Total : 3564.95 13.93 0.00 0.00 35824.18 7417.48 36700.16 00:16:53.331 0 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:53.331 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:53.331 nvmf_trace.0 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85643 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85643 ']' 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85643 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85643 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:53.590 killing process with pid 85643 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85643' 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85643 00:16:53.590 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.590 00:16:53.590 Latency(us) 00:16:53.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.590 =================================================================================================================== 00:16:53.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.590 [2024-07-15 13:01:05.882491] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:53.590 13:01:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85643 00:16:53.590 13:01:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:53.590 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # nvmfcleanup 
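The MiB/s column in the table above is simply IOPS multiplied by the 4096-byte I/O size; as a quick sanity check on the TLSTESTn1 row:

  awk 'BEGIN { printf "%.2f MiB/s\n", 3564.95 * 4096 / (1024 * 1024) }'    # prints 13.93 MiB/s, matching the report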
00:16:53.590 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.872 rmmod nvme_tcp 00:16:53.872 rmmod nvme_fabrics 00:16:53.872 rmmod nvme_keyring 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@493 -- # '[' -n 85591 ']' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@494 -- # killprocess 85591 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85591 ']' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85591 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85591 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:53.872 killing process with pid 85591 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85591' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85591 00:16:53.872 [2024-07-15 13:01:06.148364] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85591 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@282 -- # remove_spdk_ns 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.872 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.131 13:01:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:16:54.131 13:01:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:54.131 00:16:54.131 real 0m14.190s 00:16:54.131 user 0m19.322s 00:16:54.131 sys 0m5.601s 00:16:54.131 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.131 13:01:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:54.131 ************************************ 00:16:54.131 END TEST nvmf_fips 00:16:54.131 ************************************ 00:16:54.131 13:01:06 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@75 -- # [[ virt == phy ]] 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@90 -- # timing_exit target 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@92 -- # timing_enter host 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@94 -- # [[ 0 -eq 0 ]] 00:16:54.131 13:01:06 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.131 13:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.131 ************************************ 00:16:54.131 START TEST nvmf_multicontroller 00:16:54.131 ************************************ 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:54.131 * Looking for test storage... 00:16:54.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.131 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
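As in the fips suite above, sourcing nvmf/common.sh generates a fresh host NQN for the run and derives the host ID from it. A minimal sketch of those assignments; only the resulting values appear in the trace, so the parameter expansion used here to extract the UUID is an assumption about how the script might do it:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip everything through the last ':' to keep the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")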
00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:54.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@452 -- # prepare_net_devs 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # local -g is_hw=no 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # remove_spdk_ns 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@436 -- # nvmf_veth_init 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:16:54.132 Cannot find device "nvmf_tgt_br" 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.132 Cannot find device "nvmf_tgt_br2" 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # true 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:16:54.132 Cannot find device "nvmf_tgt_br" 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:16:54.132 Cannot find device "nvmf_tgt_br2" 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:54.132 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:16:54.412 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:16:54.412 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.412 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 
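The veth topology being rebuilt here mirrors the one sketched after the fips ping checks; what is specific to this suite are the knobs set a few lines earlier: a 64 MiB malloc bdev with 512-byte blocks, two host-side ports 60000 and 60001 (presumably one per controller path), and the usual bdevperf RPC socket. As an illustrative sketch only, since multicontroller.sh's actual rpc.py calls are not part of this excerpt, a target exposing such a bdev over TCP would look roughly like:

  rpc.py nvmf_create_transport -t tcp
  rpc.py bdev_malloc_create -b Malloc0 64 512                          # 64 MiB, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420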
00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:16:54.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:16:54.413 00:16:54.413 --- 10.0.0.2 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:16:54.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:54.413 00:16:54.413 --- 10.0.0.3 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:54.413 00:16:54.413 --- 10.0.0.1 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@437 -- # return 0 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:16:54.413 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@485 -- # nvmfpid=86001 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@486 -- # waitforlisten 86001 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86001 ']' 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.699 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.700 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.700 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.700 13:01:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:54.700 [2024-07-15 13:01:06.930929] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:54.700 [2024-07-15 13:01:06.931035] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.700 [2024-07-15 13:01:07.069684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.700 [2024-07-15 13:01:07.153182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:54.700 [2024-07-15 13:01:07.153238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.700 [2024-07-15 13:01:07.153251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.700 [2024-07-15 13:01:07.153259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.700 [2024-07-15 13:01:07.153267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.700 [2024-07-15 13:01:07.153334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.700 [2024-07-15 13:01:07.153396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.700 [2024-07-15 13:01:07.153410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.631 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.631 [2024-07-15 13:01:08.098107] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 Malloc0 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 [2024-07-15 13:01:08.150099] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 [2024-07-15 13:01:08.158047] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 Malloc1 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86053 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86053 /var/tmp/bdevperf.sock 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86053 ']' 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.889 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.147 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.147 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:56.147 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:56.147 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.147 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.405 NVMe0n1 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.405 1 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.405 2024/07/15 13:01:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.405 request: 00:16:56.405 { 00:16:56.405 "method": "bdev_nvme_attach_controller", 00:16:56.405 "params": { 00:16:56.405 "name": "NVMe0", 00:16:56.405 "trtype": "tcp", 00:16:56.405 "traddr": "10.0.0.2", 00:16:56.405 "adrfam": "ipv4", 00:16:56.405 "trsvcid": "4420", 00:16:56.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.405 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:56.405 "hostaddr": "10.0.0.2", 00:16:56.405 "hostsvcid": "60000", 00:16:56.405 "prchk_reftag": false, 00:16:56.405 "prchk_guard": false, 00:16:56.405 "hdgst": false, 00:16:56.405 "ddgst": false 00:16:56.405 } 00:16:56.405 } 00:16:56.405 Got JSON-RPC error response 00:16:56.405 GoRPCClient: error on JSON-RPC call 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.405 13:01:08 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.405 2024/07/15 13:01:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.405 request: 00:16:56.405 { 00:16:56.405 "method": "bdev_nvme_attach_controller", 00:16:56.405 "params": { 00:16:56.405 "name": "NVMe0", 00:16:56.405 "trtype": "tcp", 00:16:56.405 "traddr": "10.0.0.2", 00:16:56.405 "adrfam": "ipv4", 00:16:56.405 "trsvcid": "4420", 00:16:56.405 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:56.405 "hostaddr": "10.0.0.2", 00:16:56.405 "hostsvcid": "60000", 00:16:56.405 "prchk_reftag": false, 00:16:56.405 "prchk_guard": false, 00:16:56.405 "hdgst": false, 00:16:56.405 "ddgst": false 00:16:56.405 } 00:16:56.405 } 00:16:56.405 Got JSON-RPC error response 00:16:56.405 GoRPCClient: error on JSON-RPC call 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:56.405 13:01:08 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.405 2024/07/15 13:01:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:56.405 request: 00:16:56.405 { 00:16:56.405 "method": "bdev_nvme_attach_controller", 00:16:56.405 "params": { 00:16:56.405 "name": "NVMe0", 00:16:56.405 "trtype": "tcp", 00:16:56.405 "traddr": "10.0.0.2", 00:16:56.405 "adrfam": "ipv4", 00:16:56.405 "trsvcid": "4420", 00:16:56.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.405 "hostaddr": "10.0.0.2", 00:16:56.405 "hostsvcid": "60000", 00:16:56.405 "prchk_reftag": false, 00:16:56.405 "prchk_guard": false, 00:16:56.405 "hdgst": false, 00:16:56.405 "ddgst": false, 00:16:56.405 "multipath": "disable" 00:16:56.405 } 00:16:56.405 } 00:16:56.405 Got JSON-RPC error response 00:16:56.405 GoRPCClient: error on JSON-RPC call 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.405 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.406 2024/07/15 13:01:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:56.406 request: 00:16:56.406 { 00:16:56.406 "method": "bdev_nvme_attach_controller", 00:16:56.406 "params": { 00:16:56.406 "name": "NVMe0", 00:16:56.406 "trtype": "tcp", 00:16:56.406 "traddr": "10.0.0.2", 00:16:56.406 "adrfam": "ipv4", 00:16:56.406 "trsvcid": "4420", 00:16:56.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.406 "hostaddr": "10.0.0.2", 00:16:56.406 "hostsvcid": "60000", 00:16:56.406 "prchk_reftag": false, 00:16:56.406 "prchk_guard": false, 00:16:56.406 "hdgst": false, 00:16:56.406 "ddgst": false, 00:16:56.406 "multipath": "failover" 00:16:56.406 } 00:16:56.406 } 00:16:56.406 Got JSON-RPC error response 00:16:56.406 GoRPCClient: error on JSON-RPC call 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.406 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.406 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # 
grep -c NVMe 00:16:56.406 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:56.664 13:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.664 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:56.664 13:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.596 0 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86053 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86053 ']' 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86053 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86053 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.596 killing process with pid 86053 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86053' 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86053 00:16:57.596 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86053 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # sort -u 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:57.854 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:57.854 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:57.854 [2024-07-15 13:01:08.260570] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:16:57.854 [2024-07-15 13:01:08.260692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86053 ] 00:16:57.854 [2024-07-15 13:01:08.394324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.854 [2024-07-15 13:01:08.461071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.854 [2024-07-15 13:01:08.859100] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name a55b61a0-6805-485f-a267-347bd53bb2fa already exists 00:16:57.854 [2024-07-15 13:01:08.859179] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:a55b61a0-6805-485f-a267-347bd53bb2fa alias for bdev NVMe1n1 00:16:57.854 [2024-07-15 13:01:08.859200] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:57.854 Running I/O for 1 seconds... 00:16:57.854 00:16:57.854 Latency(us) 00:16:57.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.854 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:57.854 NVMe0n1 : 1.01 16426.67 64.17 0.00 0.00 7778.36 2174.60 20614.05 00:16:57.854 =================================================================================================================== 00:16:57.854 Total : 16426.67 64.17 0.00 0.00 7778.36 2174.60 20614.05 00:16:57.854 Received shutdown signal, test time was about 1.000000 seconds 00:16:57.854 00:16:57.854 Latency(us) 00:16:57.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.855 =================================================================================================================== 00:16:57.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.855 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # nvmfcleanup 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.855 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.855 rmmod nvme_tcp 00:16:58.113 rmmod nvme_fabrics 00:16:58.113 rmmod nvme_keyring 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.113 13:01:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@493 -- # '[' -n 86001 ']' 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@494 -- # killprocess 86001 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86001 ']' 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86001 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86001 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:58.113 killing process with pid 86001 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86001' 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86001 00:16:58.113 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86001 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@282 -- # remove_spdk_ns 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:16:58.371 00:16:58.371 real 0m4.199s 00:16:58.371 user 0m12.670s 00:16:58.371 sys 0m0.976s 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.371 13:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:58.371 ************************************ 00:16:58.371 END TEST nvmf_multicontroller 00:16:58.371 ************************************ 00:16:58.371 13:01:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:58.371 13:01:10 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:58.371 13:01:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:58.371 13:01:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.371 13:01:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:58.371 ************************************ 00:16:58.371 START TEST nvmf_aer 00:16:58.371 ************************************ 00:16:58.371 13:01:10 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:58.371 * Looking for test storage... 00:16:58.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:58.371 13:01:10 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:58.371 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:58.371 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@452 -- # prepare_net_devs 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # local -g is_hw=no 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # remove_spdk_ns 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@436 -- # nvmf_veth_init 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:16:58.372 Cannot find device "nvmf_tgt_br" 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.372 Cannot find device "nvmf_tgt_br2" 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # true 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:16:58.372 Cannot find device "nvmf_tgt_br" 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:16:58.372 Cannot find device "nvmf_tgt_br2" 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:58.372 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # true 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
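The run of "Cannot find device" / "Cannot open network namespace" messages here is expected: the aer test re-runs the same veth bring-up, and it first makes a best-effort pass at deleting whatever a previous test might have left behind. The bare "true" traced immediately after each failing delete is consistent with an "|| true" guard; the pattern is roughly (a sketch of the pattern only, not the exact common.sh source):
  # best-effort teardown: tolerate "does not exist" failures so a fresh setup can proceed
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br nomaster  || true
  ip link set nvmf_tgt_br2 nomaster || true
  ip link set nvmf_init_br down || true
  ip link set nvmf_tgt_br down  || true
  ip link set nvmf_tgt_br2 down || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
Because every delete is tolerated, the "ip netns add" / "ip link add" sequence that follows starts from a clean slate whether or not the previous test (nvmf_multicontroller above, which already tore its devices down) left anything behind.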
00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@167 -- # true 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:58.631 13:01:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:58.631 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:16:58.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:58.968 00:16:58.968 --- 10.0.0.2 ping statistics --- 00:16:58.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.968 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:16:58.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:58.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:58.968 00:16:58.968 --- 10.0.0.3 ping statistics --- 00:16:58.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.968 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:58.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:58.968 00:16:58.968 --- 10.0.0.1 ping statistics --- 00:16:58.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.968 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@437 -- # return 0 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@485 -- # nvmfpid=86287 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@486 -- # waitforlisten 86287 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86287 ']' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.968 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:58.968 [2024-07-15 13:01:11.188109] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
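At this point nvmfappstart has launched nvmf_tgt inside the test namespace and waitforlisten is blocking until the target's RPC socket accepts connections; the rpc_cmd calls that follow then configure the target over JSON-RPC. An approximately equivalent manual sequence is sketched below — every method name and argument is taken from the trace, but driving them directly through scripts/rpc.py against the default /var/tmp/spdk.sock socket (rather than through the test harness's rpc_cmd helper) is an assumption:
  # start the target inside the namespace: instance 0, all trace groups, cores 0-3
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # after the RPC socket comes up, build the TCP target configuration
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
The -m 2 cap on max_namespaces (visible in the nvmf_get_subsystems output below) leaves room for one namespace beyond Malloc0, which is the headroom an asynchronous-event test needs if it is going to change the subsystem's namespace list while the aer reporter is attached.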
00:16:58.968 [2024-07-15 13:01:11.188204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.968 [2024-07-15 13:01:11.330579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.968 [2024-07-15 13:01:11.403903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.968 [2024-07-15 13:01:11.403967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.968 [2024-07-15 13:01:11.403979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.968 [2024-07-15 13:01:11.403988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.968 [2024-07-15 13:01:11.403995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.968 [2024-07-15 13:01:11.404274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.968 [2024-07-15 13:01:11.404727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.968 [2024-07-15 13:01:11.404893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.968 [2024-07-15 13:01:11.404901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 [2024-07-15 13:01:11.521412] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 Malloc0 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.226 13:01:11 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.226 [2024-07-15 13:01:11.574774] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:59.226 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.227 [ 00:16:59.227 { 00:16:59.227 "allow_any_host": true, 00:16:59.227 "hosts": [], 00:16:59.227 "listen_addresses": [], 00:16:59.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.227 "subtype": "Discovery" 00:16:59.227 }, 00:16:59.227 { 00:16:59.227 "allow_any_host": true, 00:16:59.227 "hosts": [], 00:16:59.227 "listen_addresses": [ 00:16:59.227 { 00:16:59.227 "adrfam": "IPv4", 00:16:59.227 "traddr": "10.0.0.2", 00:16:59.227 "trsvcid": "4420", 00:16:59.227 "trtype": "TCP" 00:16:59.227 } 00:16:59.227 ], 00:16:59.227 "max_cntlid": 65519, 00:16:59.227 "max_namespaces": 2, 00:16:59.227 "min_cntlid": 1, 00:16:59.227 "model_number": "SPDK bdev Controller", 00:16:59.227 "namespaces": [ 00:16:59.227 { 00:16:59.227 "bdev_name": "Malloc0", 00:16:59.227 "name": "Malloc0", 00:16:59.227 "nguid": "3F11A2EF75704F43A51F8F85EBDE2BF4", 00:16:59.227 "nsid": 1, 00:16:59.227 "uuid": "3f11a2ef-7570-4f43-a51f-8f85ebde2bf4" 00:16:59.227 } 00:16:59.227 ], 00:16:59.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.227 "serial_number": "SPDK00000000000001", 00:16:59.227 "subtype": "NVMe" 00:16:59.227 } 00:16:59.227 ] 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86323 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:59.227 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 Malloc1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 [ 00:16:59.486 { 00:16:59.486 "allow_any_host": true, 00:16:59.486 "hosts": [], 00:16:59.486 "listen_addresses": [], 00:16:59.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.486 "subtype": "Discovery" 00:16:59.486 }, 00:16:59.486 { 00:16:59.486 "allow_any_host": true, 00:16:59.486 "hosts": [], 00:16:59.486 "listen_addresses": [ 00:16:59.486 { 00:16:59.486 "adrfam": "IPv4", 00:16:59.486 "traddr": "10.0.0.2", 00:16:59.486 "trsvcid": "4420", 00:16:59.486 "trtype": "TCP" 00:16:59.486 } 00:16:59.486 ], 00:16:59.486 "max_cntlid": 65519, 00:16:59.486 "max_namespaces": 2, 00:16:59.486 "min_cntlid": 1, 00:16:59.486 "model_number": "SPDK bdev Controller", 00:16:59.486 "namespaces": [ 00:16:59.486 { 00:16:59.486 "bdev_name": "Malloc0", 00:16:59.486 "name": "Malloc0", 00:16:59.486 "nguid": "3F11A2EF75704F43A51F8F85EBDE2BF4", 00:16:59.486 "nsid": 1, 00:16:59.486 "uuid": "3f11a2ef-7570-4f43-a51f-8f85ebde2bf4" 00:16:59.486 }, 00:16:59.486 { 00:16:59.486 "bdev_name": "Malloc1", 00:16:59.486 "name": "Malloc1", 00:16:59.486 "nguid": "04F5076E1B964D9DB2C5BBE9A7B073AD", 00:16:59.486 Asynchronous Event Request test 00:16:59.486 Attaching to 10.0.0.2 00:16:59.486 Attached to 10.0.0.2 00:16:59.486 Registering asynchronous event callbacks... 00:16:59.486 Starting namespace attribute notice tests for all controllers... 00:16:59.486 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:59.486 aer_cb - Changed Namespace 00:16:59.486 Cleaning up... 
00:16:59.486 "nsid": 2, 00:16:59.486 "uuid": "04f5076e-1b96-4d9d-b2c5-bbe9a7b073ad" 00:16:59.486 } 00:16:59.486 ], 00:16:59.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.486 "serial_number": "SPDK00000000000001", 00:16:59.486 "subtype": "NVMe" 00:16:59.486 } 00:16:59.486 ] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86323 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # nvmfcleanup 00:16:59.486 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:16:59.742 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.742 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:16:59.742 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.742 13:01:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.742 rmmod nvme_tcp 00:16:59.742 rmmod nvme_fabrics 00:16:59.742 rmmod nvme_keyring 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@493 -- # '[' -n 86287 ']' 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@494 -- # killprocess 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86287 ']' 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:59.742 killing process with pid 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 86287' 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86287 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@282 -- # remove_spdk_ns 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.742 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.743 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.000 13:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:17:00.000 00:17:00.000 real 0m1.564s 00:17:00.000 user 0m3.251s 00:17:00.000 sys 0m0.534s 00:17:00.000 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.000 13:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:00.000 ************************************ 00:17:00.000 END TEST nvmf_aer 00:17:00.000 ************************************ 00:17:00.000 13:01:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.000 13:01:12 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:00.000 13:01:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.000 13:01:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.000 13:01:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.000 ************************************ 00:17:00.000 START TEST nvmf_async_init 00:17:00.000 ************************************ 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:00.000 * Looking for test storage... 
00:17:00.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.000 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=86718cb580d2491c92b47b5384f30857 00:17:00.001 
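For reference, the nvmftestinit call that follows rebuilds the same test network the nvmf_aer run above used: three veth pairs, a target network namespace, a bridge joining the host-side peers, and an iptables rule opening TCP port 4420 toward the initiator interface. A condensed sketch of those steps, with the commands and interface names copied from the nvmf_veth_init trace (illustrative only, not a substitute for nvmf/common.sh):

    # Target namespace plus three veth pairs (initiator, target, second target).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, bridge the host-side peers, and allow NVMe/TCP traffic.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 that follow are sanity checks that this wiring works before nvmf_tgt is started in the namespace.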
13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@436 -- # nvmf_veth_init 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:17:00.001 Cannot find device "nvmf_tgt_br" 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.001 Cannot find device "nvmf_tgt_br2" 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # true 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:17:00.001 Cannot find device 
"nvmf_tgt_br" 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:17:00.001 Cannot find device "nvmf_tgt_br2" 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:17:00.001 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:17:00.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:00.257 00:17:00.257 --- 10.0.0.2 ping statistics --- 00:17:00.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.257 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:00.257 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:17:00.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:00.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:00.257 00:17:00.257 --- 10.0.0.3 ping statistics --- 00:17:00.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.257 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:00.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:00.513 00:17:00.513 --- 10.0.0.1 ping statistics --- 00:17:00.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.513 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@437 -- # return 0 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@485 -- # nvmfpid=86495 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@486 -- # waitforlisten 86495 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86495 ']' 00:17:00.513 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.514 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:17:00.514 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.514 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.514 13:01:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.514 [2024-07-15 13:01:12.835588] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:00.514 [2024-07-15 13:01:12.835720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.514 [2024-07-15 13:01:12.978631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.770 [2024-07-15 13:01:13.039153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.770 [2024-07-15 13:01:13.039222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.770 [2024-07-15 13:01:13.039235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.770 [2024-07-15 13:01:13.039244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.770 [2024-07-15 13:01:13.039251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.770 [2024-07-15 13:01:13.039281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 [2024-07-15 13:01:13.151212] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 null0 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 86718cb580d2491c92b47b5384f30857 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 [2024-07-15 13:01:13.191318] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.770 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.027 nvme0n1 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.027 [ 00:17:01.027 { 00:17:01.027 "aliases": [ 00:17:01.027 "86718cb5-80d2-491c-92b4-7b5384f30857" 00:17:01.027 ], 00:17:01.027 "assigned_rate_limits": { 00:17:01.027 "r_mbytes_per_sec": 0, 00:17:01.027 "rw_ios_per_sec": 0, 00:17:01.027 "rw_mbytes_per_sec": 0, 00:17:01.027 "w_mbytes_per_sec": 0 00:17:01.027 }, 00:17:01.027 "block_size": 512, 00:17:01.027 "claimed": false, 00:17:01.027 "driver_specific": { 00:17:01.027 "mp_policy": "active_passive", 00:17:01.027 "nvme": [ 00:17:01.027 { 00:17:01.027 "ctrlr_data": { 00:17:01.027 "ana_reporting": false, 00:17:01.027 "cntlid": 1, 00:17:01.027 "firmware_revision": "24.09", 00:17:01.027 "model_number": "SPDK bdev Controller", 00:17:01.027 "multi_ctrlr": true, 00:17:01.027 "oacs": { 00:17:01.027 "firmware": 0, 00:17:01.027 "format": 0, 00:17:01.027 "ns_manage": 0, 00:17:01.027 "security": 0 00:17:01.027 }, 00:17:01.027 "serial_number": "00000000000000000000", 00:17:01.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.027 "vendor_id": "0x8086" 00:17:01.027 }, 00:17:01.027 "ns_data": { 
00:17:01.027 "can_share": true, 00:17:01.027 "id": 1 00:17:01.027 }, 00:17:01.027 "trid": { 00:17:01.027 "adrfam": "IPv4", 00:17:01.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.027 "traddr": "10.0.0.2", 00:17:01.027 "trsvcid": "4420", 00:17:01.027 "trtype": "TCP" 00:17:01.027 }, 00:17:01.027 "vs": { 00:17:01.027 "nvme_version": "1.3" 00:17:01.027 } 00:17:01.027 } 00:17:01.027 ] 00:17:01.027 }, 00:17:01.027 "memory_domains": [ 00:17:01.027 { 00:17:01.027 "dma_device_id": "system", 00:17:01.027 "dma_device_type": 1 00:17:01.027 } 00:17:01.027 ], 00:17:01.027 "name": "nvme0n1", 00:17:01.027 "num_blocks": 2097152, 00:17:01.027 "product_name": "NVMe disk", 00:17:01.027 "supported_io_types": { 00:17:01.027 "abort": true, 00:17:01.027 "compare": true, 00:17:01.027 "compare_and_write": true, 00:17:01.027 "copy": true, 00:17:01.027 "flush": true, 00:17:01.027 "get_zone_info": false, 00:17:01.027 "nvme_admin": true, 00:17:01.027 "nvme_io": true, 00:17:01.027 "nvme_io_md": false, 00:17:01.027 "nvme_iov_md": false, 00:17:01.027 "read": true, 00:17:01.027 "reset": true, 00:17:01.027 "seek_data": false, 00:17:01.027 "seek_hole": false, 00:17:01.027 "unmap": false, 00:17:01.027 "write": true, 00:17:01.027 "write_zeroes": true, 00:17:01.027 "zcopy": false, 00:17:01.027 "zone_append": false, 00:17:01.027 "zone_management": false 00:17:01.027 }, 00:17:01.027 "uuid": "86718cb5-80d2-491c-92b4-7b5384f30857", 00:17:01.027 "zoned": false 00:17:01.027 } 00:17:01.027 ] 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.027 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.027 [2024-07-15 13:01:13.453043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:01.027 [2024-07-15 13:01:13.453175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1312b70 (9): Bad file descriptor 00:17:01.285 [2024-07-15 13:01:13.594978] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
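Stripped of the xtrace plumbing, the async_init sequence exercised up to this point reduces to the RPC calls below. This is a hedged sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket (the rpc_cmd helper in the trace drives the same RPCs); all arguments are copied from the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumes nvmf_tgt is already running and serving /var/tmp/spdk.sock

    # Target side: transport, null bdev, subsystem, namespace with a fixed NGUID, and a TCP listener.
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_null_create null0 1024 512                       # 1024 MiB, 512-byte blocks -> 2097152 blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # allow any host for now
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 86718cb580d2491c92b47b5384f30857
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: attach a bdev_nvme controller over TCP, inspect it, then reset it.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_get_bdevs -b nvme0n1        # the namespace surfaces the -g value as the bdev uuid
    $RPC bdev_nvme_reset_controller nvme0 # produces the disconnect/reconnect notices seen above

The point of the check that follows is that the bdev created through the async attach keeps its identity across the reset: the second bdev_get_bdevs dump shows the same uuid, with cntlid bumped from 1 to 2.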
00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.285 [ 00:17:01.285 { 00:17:01.285 "aliases": [ 00:17:01.285 "86718cb5-80d2-491c-92b4-7b5384f30857" 00:17:01.285 ], 00:17:01.285 "assigned_rate_limits": { 00:17:01.285 "r_mbytes_per_sec": 0, 00:17:01.285 "rw_ios_per_sec": 0, 00:17:01.285 "rw_mbytes_per_sec": 0, 00:17:01.285 "w_mbytes_per_sec": 0 00:17:01.285 }, 00:17:01.285 "block_size": 512, 00:17:01.285 "claimed": false, 00:17:01.285 "driver_specific": { 00:17:01.285 "mp_policy": "active_passive", 00:17:01.285 "nvme": [ 00:17:01.285 { 00:17:01.285 "ctrlr_data": { 00:17:01.285 "ana_reporting": false, 00:17:01.285 "cntlid": 2, 00:17:01.285 "firmware_revision": "24.09", 00:17:01.285 "model_number": "SPDK bdev Controller", 00:17:01.285 "multi_ctrlr": true, 00:17:01.285 "oacs": { 00:17:01.285 "firmware": 0, 00:17:01.285 "format": 0, 00:17:01.285 "ns_manage": 0, 00:17:01.285 "security": 0 00:17:01.285 }, 00:17:01.285 "serial_number": "00000000000000000000", 00:17:01.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.285 "vendor_id": "0x8086" 00:17:01.285 }, 00:17:01.285 "ns_data": { 00:17:01.285 "can_share": true, 00:17:01.285 "id": 1 00:17:01.285 }, 00:17:01.285 "trid": { 00:17:01.285 "adrfam": "IPv4", 00:17:01.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.285 "traddr": "10.0.0.2", 00:17:01.285 "trsvcid": "4420", 00:17:01.285 "trtype": "TCP" 00:17:01.285 }, 00:17:01.285 "vs": { 00:17:01.285 "nvme_version": "1.3" 00:17:01.285 } 00:17:01.285 } 00:17:01.285 ] 00:17:01.285 }, 00:17:01.285 "memory_domains": [ 00:17:01.285 { 00:17:01.285 "dma_device_id": "system", 00:17:01.285 "dma_device_type": 1 00:17:01.285 } 00:17:01.285 ], 00:17:01.285 "name": "nvme0n1", 00:17:01.285 "num_blocks": 2097152, 00:17:01.285 "product_name": "NVMe disk", 00:17:01.285 "supported_io_types": { 00:17:01.285 "abort": true, 00:17:01.285 "compare": true, 00:17:01.285 "compare_and_write": true, 00:17:01.285 "copy": true, 00:17:01.285 "flush": true, 00:17:01.285 "get_zone_info": false, 00:17:01.285 "nvme_admin": true, 00:17:01.285 "nvme_io": true, 00:17:01.285 "nvme_io_md": false, 00:17:01.285 "nvme_iov_md": false, 00:17:01.285 "read": true, 00:17:01.285 "reset": true, 00:17:01.285 "seek_data": false, 00:17:01.285 "seek_hole": false, 00:17:01.285 "unmap": false, 00:17:01.285 "write": true, 00:17:01.285 "write_zeroes": true, 00:17:01.285 "zcopy": false, 00:17:01.285 "zone_append": false, 00:17:01.285 "zone_management": false 00:17:01.285 }, 00:17:01.285 "uuid": "86718cb5-80d2-491c-92b4-7b5384f30857", 00:17:01.285 "zoned": false 00:17:01.285 } 00:17:01.285 ] 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:01.285 13:01:13 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0DlqWgFj6k 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0DlqWgFj6k 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.285 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.286 [2024-07-15 13:01:13.653219] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:01.286 [2024-07-15 13:01:13.653412] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0DlqWgFj6k 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.286 [2024-07-15 13:01:13.661211] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0DlqWgFj6k 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.286 [2024-07-15 13:01:13.669212] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.286 [2024-07-15 13:01:13.669293] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:01.286 nvme0n1 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.286 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.286 [ 00:17:01.286 { 00:17:01.286 "aliases": [ 00:17:01.286 "86718cb5-80d2-491c-92b4-7b5384f30857" 00:17:01.286 ], 00:17:01.286 "assigned_rate_limits": { 00:17:01.286 "r_mbytes_per_sec": 0, 00:17:01.286 
"rw_ios_per_sec": 0, 00:17:01.286 "rw_mbytes_per_sec": 0, 00:17:01.286 "w_mbytes_per_sec": 0 00:17:01.286 }, 00:17:01.286 "block_size": 512, 00:17:01.286 "claimed": false, 00:17:01.286 "driver_specific": { 00:17:01.286 "mp_policy": "active_passive", 00:17:01.286 "nvme": [ 00:17:01.286 { 00:17:01.286 "ctrlr_data": { 00:17:01.286 "ana_reporting": false, 00:17:01.286 "cntlid": 3, 00:17:01.286 "firmware_revision": "24.09", 00:17:01.286 "model_number": "SPDK bdev Controller", 00:17:01.286 "multi_ctrlr": true, 00:17:01.286 "oacs": { 00:17:01.286 "firmware": 0, 00:17:01.286 "format": 0, 00:17:01.286 "ns_manage": 0, 00:17:01.286 "security": 0 00:17:01.286 }, 00:17:01.286 "serial_number": "00000000000000000000", 00:17:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.286 "vendor_id": "0x8086" 00:17:01.286 }, 00:17:01.286 "ns_data": { 00:17:01.286 "can_share": true, 00:17:01.286 "id": 1 00:17:01.286 }, 00:17:01.286 "trid": { 00:17:01.286 "adrfam": "IPv4", 00:17:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.286 "traddr": "10.0.0.2", 00:17:01.286 "trsvcid": "4421", 00:17:01.286 "trtype": "TCP" 00:17:01.286 }, 00:17:01.286 "vs": { 00:17:01.286 "nvme_version": "1.3" 00:17:01.544 } 00:17:01.544 } 00:17:01.544 ] 00:17:01.544 }, 00:17:01.544 "memory_domains": [ 00:17:01.544 { 00:17:01.544 "dma_device_id": "system", 00:17:01.544 "dma_device_type": 1 00:17:01.544 } 00:17:01.544 ], 00:17:01.544 "name": "nvme0n1", 00:17:01.544 "num_blocks": 2097152, 00:17:01.544 "product_name": "NVMe disk", 00:17:01.544 "supported_io_types": { 00:17:01.544 "abort": true, 00:17:01.544 "compare": true, 00:17:01.544 "compare_and_write": true, 00:17:01.544 "copy": true, 00:17:01.544 "flush": true, 00:17:01.544 "get_zone_info": false, 00:17:01.544 "nvme_admin": true, 00:17:01.544 "nvme_io": true, 00:17:01.544 "nvme_io_md": false, 00:17:01.544 "nvme_iov_md": false, 00:17:01.544 "read": true, 00:17:01.544 "reset": true, 00:17:01.544 "seek_data": false, 00:17:01.544 "seek_hole": false, 00:17:01.544 "unmap": false, 00:17:01.544 "write": true, 00:17:01.544 "write_zeroes": true, 00:17:01.544 "zcopy": false, 00:17:01.544 "zone_append": false, 00:17:01.544 "zone_management": false 00:17:01.544 }, 00:17:01.544 "uuid": "86718cb5-80d2-491c-92b4-7b5384f30857", 00:17:01.544 "zoned": false 00:17:01.544 } 00:17:01.544 ] 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0DlqWgFj6k 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 
00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.544 rmmod nvme_tcp 00:17:01.544 rmmod nvme_fabrics 00:17:01.544 rmmod nvme_keyring 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@493 -- # '[' -n 86495 ']' 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@494 -- # killprocess 86495 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86495 ']' 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86495 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86495 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:01.544 killing process with pid 86495 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86495' 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86495 00:17:01.544 [2024-07-15 13:01:13.894122] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:01.544 [2024-07-15 13:01:13.894174] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:01.544 13:01:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86495 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:17:01.802 00:17:01.802 real 0m1.823s 00:17:01.802 user 0m1.474s 00:17:01.802 sys 0m0.499s 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.802 13:01:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:01.802 ************************************ 00:17:01.802 END TEST nvmf_async_init 00:17:01.802 ************************************ 00:17:01.802 13:01:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:01.802 13:01:14 nvmf_tcp -- 
nvmf/nvmf.sh@98 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:01.802 13:01:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:01.802 13:01:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.802 13:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.802 ************************************ 00:17:01.802 START TEST dma 00:17:01.802 ************************************ 00:17:01.802 13:01:14 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:01.802 * Looking for test storage... 00:17:01.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.802 13:01:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.802 13:01:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.802 13:01:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.802 13:01:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.802 13:01:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.802 13:01:14 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.802 13:01:14 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.802 13:01:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:01.802 13:01:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # : 0 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.802 13:01:14 nvmf_tcp.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.802 13:01:14 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:01.802 13:01:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:01.802 00:17:01.802 real 0m0.092s 00:17:01.802 user 0m0.050s 00:17:01.802 sys 0m0.047s 00:17:01.802 13:01:14 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.802 ************************************ 00:17:01.802 13:01:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:01.802 END TEST dma 00:17:01.802 ************************************ 00:17:02.059 13:01:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.059 13:01:14 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.059 13:01:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 
1 ']' 00:17:02.059 13:01:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.059 13:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.059 ************************************ 00:17:02.059 START TEST nvmf_identify 00:17:02.059 ************************************ 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:02.059 * Looking for test storage... 00:17:02.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@416 -- # remove_spdk_ns 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@436 -- # nvmf_veth_init 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:17:02.059 Cannot find device "nvmf_tgt_br" 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.059 Cannot find device "nvmf_tgt_br2" 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:17:02.059 Cannot find device "nvmf_tgt_br" 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:17:02.059 Cannot find device "nvmf_tgt_br2" 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:17:02.059 13:01:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:02.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:02.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.059 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:17:02.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:02.317 00:17:02.317 --- 10.0.0.2 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:17:02.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:02.317 00:17:02.317 --- 10.0.0.3 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:02.317 00:17:02.317 --- 10.0.0.1 ping statistics --- 00:17:02.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.317 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@437 -- # return 0 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86751 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86751 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86751 ']' 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
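Editor's note: condensed for reference, the nvmf_veth_init records above build a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), leave the initiator end (10.0.0.1) in the root namespace, bridge everything over nvmf_br, admit TCP/4420 via iptables, and verify with pings. A by-hand sketch of the same topology (single target interface shown; names and addresses are the ones appearing in this log):

# Sketch of the topology built by nvmf_veth_init above (second target pair omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target, as verified in the log above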
00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.317 13:01:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:02.575 [2024-07-15 13:01:14.799445] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:02.575 [2024-07-15 13:01:14.799578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.575 [2024-07-15 13:01:14.944810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.575 [2024-07-15 13:01:15.007553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.575 [2024-07-15 13:01:15.007615] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.575 [2024-07-15 13:01:15.007633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.575 [2024-07-15 13:01:15.007646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.575 [2024-07-15 13:01:15.007657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.575 [2024-07-15 13:01:15.007785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.575 [2024-07-15 13:01:15.007855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.575 [2024-07-15 13:01:15.008455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.575 [2024-07-15 13:01:15.008472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.509 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.509 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 [2024-07-15 13:01:15.885962] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 Malloc0 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
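Editor's note: per the records above, the target itself is started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and, once its RPC socket answers, the TCP transport is created with the flags traced above (-t tcp -o -u 8192). A stripped-down sketch of those two steps, assuming the default /var/tmp/spdk.sock socket; the test's waitforlisten helper is replaced here by a simple RPC poll:

# Sketch only: start nvmf_tgt in the namespace and create the TCP transport.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Crude stand-in for waitforlisten: poll until the RPC socket responds.
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192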
00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.510 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 [2024-07-15 13:01:15.980218] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.769 13:01:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 [ 00:17:03.769 { 00:17:03.769 "allow_any_host": true, 00:17:03.769 "hosts": [], 00:17:03.769 "listen_addresses": [ 00:17:03.769 { 00:17:03.769 "adrfam": "IPv4", 00:17:03.769 "traddr": "10.0.0.2", 00:17:03.769 "trsvcid": "4420", 00:17:03.769 "trtype": "TCP" 00:17:03.769 } 00:17:03.769 ], 00:17:03.769 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:03.769 "subtype": "Discovery" 00:17:03.769 }, 00:17:03.769 { 00:17:03.769 "allow_any_host": true, 00:17:03.769 "hosts": [], 00:17:03.769 "listen_addresses": [ 00:17:03.769 { 00:17:03.769 "adrfam": "IPv4", 00:17:03.769 "traddr": "10.0.0.2", 00:17:03.769 "trsvcid": "4420", 00:17:03.769 "trtype": "TCP" 00:17:03.769 } 00:17:03.769 ], 00:17:03.769 "max_cntlid": 65519, 00:17:03.769 "max_namespaces": 32, 00:17:03.769 "min_cntlid": 1, 00:17:03.769 "model_number": "SPDK bdev Controller", 00:17:03.769 "namespaces": [ 00:17:03.769 { 00:17:03.769 "bdev_name": "Malloc0", 00:17:03.769 "eui64": "ABCDEF0123456789", 00:17:03.769 "name": "Malloc0", 00:17:03.769 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:03.769 "nsid": 1, 00:17:03.769 "uuid": "36d60d29-bdc8-41d7-9bf9-f54fd1c249ad" 00:17:03.769 } 00:17:03.769 ], 00:17:03.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.769 "serial_number": "SPDK00000000000001", 00:17:03.769 "subtype": "NVMe" 00:17:03.769 } 00:17:03.769 ] 00:17:03.769 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.769 13:01:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:17:03.769 [2024-07-15 13:01:16.034373] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:03.769 [2024-07-15 13:01:16.034457] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86810 ] 00:17:03.769 [2024-07-15 13:01:16.183633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:03.769 [2024-07-15 13:01:16.183736] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:03.769 [2024-07-15 13:01:16.183746] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:03.769 [2024-07-15 13:01:16.183780] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:03.769 [2024-07-15 13:01:16.183791] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:03.770 [2024-07-15 13:01:16.184002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:03.770 [2024-07-15 13:01:16.184080] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xce5a60 0 00:17:03.770 [2024-07-15 13:01:16.190818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:03.770 [2024-07-15 13:01:16.190872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:03.770 [2024-07-15 13:01:16.190884] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:03.770 [2024-07-15 13:01:16.190891] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:03.770 [2024-07-15 13:01:16.190965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.190978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.190985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.191009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:03.770 [2024-07-15 13:01:16.191063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.198838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.198897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.198906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.198912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.198927] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:03.770 [2024-07-15 13:01:16.198941] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:03.770 [2024-07-15 13:01:16.198955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:03.770 [2024-07-15 13:01:16.198987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
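Editor's note: the target-side configuration and the identify invocation traced in the preceding records come down to the sequence sketched below (commands and arguments taken verbatim from the log). The DEBUG lines that follow are the initiator side of that identify run performing the standard controller bring-up against the discovery subsystem: ICReq (pdu type 1), FABRIC CONNECT on the admin queue, VS/CAP property reads, CC.EN=1, wait for CSTS.RDY=1, then IDENTIFY, AER/keep-alive configuration, and the discovery GET LOG PAGE (log page 0x70) whose report appears further below.

# Recap sketch of the setup and query traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
     --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Query the discovery subsystem, producing the controller report shown later in this log.
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all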
00:17:03.770 [2024-07-15 13:01:16.198994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.198999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.199013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.199065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.199227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.199237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.199241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.199252] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:03.770 [2024-07-15 13:01:16.199261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:03.770 [2024-07-15 13:01:16.199270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.199287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.199312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.199378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.199385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.199389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.199400] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:03.770 [2024-07-15 13:01:16.199409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.199417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.199433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.199452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.199523] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.199537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.199542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.199553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.199565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.199583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.199607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.199666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.199679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.199683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.199693] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:03.770 [2024-07-15 13:01:16.199699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.199707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.199814] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:03.770 [2024-07-15 13:01:16.199822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.199837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.199860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.199893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.199971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.199979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 
13:01:16.199983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.199987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.199993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:03.770 [2024-07-15 13:01:16.200004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.200041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.200117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.200142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.200150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.200160] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:03.770 [2024-07-15 13:01:16.200167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:03.770 [2024-07-15 13:01:16.200195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.200265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.200404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.770 [2024-07-15 13:01:16.200426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.770 [2024-07-15 13:01:16.200433] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200439] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce5a60): datao=0, datal=4096, cccid=0 00:17:03.770 [2024-07-15 13:01:16.200447] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd28840) on tqpair(0xce5a60): expected_datao=0, payload_size=4096 00:17:03.770 [2024-07-15 
13:01:16.200455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200468] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200475] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.200499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.200506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.200527] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:03.770 [2024-07-15 13:01:16.200535] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:03.770 [2024-07-15 13:01:16.200543] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:03.770 [2024-07-15 13:01:16.200551] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:03.770 [2024-07-15 13:01:16.200558] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:03.770 [2024-07-15 13:01:16.200566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.770 [2024-07-15 13:01:16.200642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.200725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.200734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.770 [2024-07-15 13:01:16.200740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:03.770 [2024-07-15 13:01:16.200758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:03.770 [2024-07-15 13:01:16.200817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.770 [2024-07-15 13:01:16.200852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.770 [2024-07-15 13:01:16.200873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.770 [2024-07-15 13:01:16.200893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:03.770 [2024-07-15 13:01:16.200919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.770 [2024-07-15 13:01:16.200923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce5a60) 00:17:03.770 [2024-07-15 13:01:16.200930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.770 [2024-07-15 13:01:16.200956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28840, cid 0, qid 0 00:17:03.770 [2024-07-15 13:01:16.200964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd289c0, cid 1, qid 0 00:17:03.770 [2024-07-15 13:01:16.200969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28b40, cid 2, qid 0 00:17:03.770 [2024-07-15 13:01:16.200974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:03.770 [2024-07-15 13:01:16.200979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28e40, cid 4, qid 0 00:17:03.770 [2024-07-15 13:01:16.201125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.770 [2024-07-15 13:01:16.201150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.771 [2024-07-15 13:01:16.201158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28e40) on tqpair=0xce5a60 00:17:03.771 [2024-07-15 
13:01:16.201176] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:03.771 [2024-07-15 13:01:16.201191] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:03.771 [2024-07-15 13:01:16.201211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce5a60) 00:17:03.771 [2024-07-15 13:01:16.201226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.771 [2024-07-15 13:01:16.201256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28e40, cid 4, qid 0 00:17:03.771 [2024-07-15 13:01:16.201362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.771 [2024-07-15 13:01:16.201376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.771 [2024-07-15 13:01:16.201382] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201388] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce5a60): datao=0, datal=4096, cccid=4 00:17:03.771 [2024-07-15 13:01:16.201395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd28e40) on tqpair(0xce5a60): expected_datao=0, payload_size=4096 00:17:03.771 [2024-07-15 13:01:16.201402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201413] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201419] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.771 [2024-07-15 13:01:16.201439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.771 [2024-07-15 13:01:16.201445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28e40) on tqpair=0xce5a60 00:17:03.771 [2024-07-15 13:01:16.201471] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:03.771 [2024-07-15 13:01:16.201530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce5a60) 00:17:03.771 [2024-07-15 13:01:16.201554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.771 [2024-07-15 13:01:16.201567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce5a60) 00:17:03.771 [2024-07-15 13:01:16.201589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.771 [2024-07-15 13:01:16.201620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xd28e40, cid 4, qid 0 00:17:03.771 [2024-07-15 13:01:16.201628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28fc0, cid 5, qid 0 00:17:03.771 [2024-07-15 13:01:16.201799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:03.771 [2024-07-15 13:01:16.201821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:03.771 [2024-07-15 13:01:16.201828] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201834] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce5a60): datao=0, datal=1024, cccid=4 00:17:03.771 [2024-07-15 13:01:16.201841] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd28e40) on tqpair(0xce5a60): expected_datao=0, payload_size=1024 00:17:03.771 [2024-07-15 13:01:16.201849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201859] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:03.771 [2024-07-15 13:01:16.201882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:03.771 [2024-07-15 13:01:16.201888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:03.771 [2024-07-15 13:01:16.201895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28fc0) on tqpair=0xce5a60 00:17:04.030 [2024-07-15 13:01:16.245802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.030 [2024-07-15 13:01:16.245851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.030 [2024-07-15 13:01:16.245857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.245864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28e40) on tqpair=0xce5a60 00:17:04.030 [2024-07-15 13:01:16.245893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.245899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce5a60) 00:17:04.030 [2024-07-15 13:01:16.245914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.030 [2024-07-15 13:01:16.245953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28e40, cid 4, qid 0 00:17:04.030 [2024-07-15 13:01:16.246132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.030 [2024-07-15 13:01:16.246154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.030 [2024-07-15 13:01:16.246161] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246168] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce5a60): datao=0, datal=3072, cccid=4 00:17:04.030 [2024-07-15 13:01:16.246176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd28e40) on tqpair(0xce5a60): expected_datao=0, payload_size=3072 00:17:04.030 [2024-07-15 13:01:16.246184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246211] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.030 [2024-07-15 13:01:16.246236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.030 [2024-07-15 13:01:16.246241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28e40) on tqpair=0xce5a60 00:17:04.030 [2024-07-15 13:01:16.246260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.030 [2024-07-15 13:01:16.246268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce5a60) 00:17:04.030 [2024-07-15 13:01:16.246278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.030 [2024-07-15 13:01:16.246312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28e40, cid 4, qid 0 00:17:04.031 [2024-07-15 13:01:16.246427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.031 [2024-07-15 13:01:16.246441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.031 [2024-07-15 13:01:16.246448] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.031 [2024-07-15 13:01:16.246454] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce5a60): datao=0, datal=8, cccid=4 00:17:04.031 [2024-07-15 13:01:16.246462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd28e40) on tqpair(0xce5a60): expected_datao=0, payload_size=8 00:17:04.031 [2024-07-15 13:01:16.246468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.031 [2024-07-15 13:01:16.246476] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.031 [2024-07-15 13:01:16.246481] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.031 [2024-07-15 13:01:16.286950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.031 [2024-07-15 13:01:16.286998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.031 [2024-07-15 13:01:16.287005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.031 [2024-07-15 13:01:16.287011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28e40) on tqpair=0xce5a60 00:17:04.031 ===================================================== 00:17:04.031 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:04.031 ===================================================== 00:17:04.031 Controller Capabilities/Features 00:17:04.031 ================================ 00:17:04.031 Vendor ID: 0000 00:17:04.031 Subsystem Vendor ID: 0000 00:17:04.031 Serial Number: .................... 00:17:04.031 Model Number: ........................................ 
00:17:04.031 Firmware Version: 24.09
00:17:04.031 Recommended Arb Burst: 0
00:17:04.031 IEEE OUI Identifier: 00 00 00
00:17:04.031 Multi-path I/O
00:17:04.031 May have multiple subsystem ports: No
00:17:04.031 May have multiple controllers: No
00:17:04.031 Associated with SR-IOV VF: No
00:17:04.031 Max Data Transfer Size: 131072
00:17:04.031 Max Number of Namespaces: 0
00:17:04.031 Max Number of I/O Queues: 1024
00:17:04.031 NVMe Specification Version (VS): 1.3
00:17:04.031 NVMe Specification Version (Identify): 1.3
00:17:04.031 Maximum Queue Entries: 128
00:17:04.031 Contiguous Queues Required: Yes
00:17:04.031 Arbitration Mechanisms Supported
00:17:04.031 Weighted Round Robin: Not Supported
00:17:04.031 Vendor Specific: Not Supported
00:17:04.031 Reset Timeout: 15000 ms
00:17:04.031 Doorbell Stride: 4 bytes
00:17:04.031 NVM Subsystem Reset: Not Supported
00:17:04.031 Command Sets Supported
00:17:04.031 NVM Command Set: Supported
00:17:04.031 Boot Partition: Not Supported
00:17:04.031 Memory Page Size Minimum: 4096 bytes
00:17:04.031 Memory Page Size Maximum: 4096 bytes
00:17:04.031 Persistent Memory Region: Not Supported
00:17:04.031 Optional Asynchronous Events Supported
00:17:04.031 Namespace Attribute Notices: Not Supported
00:17:04.031 Firmware Activation Notices: Not Supported
00:17:04.031 ANA Change Notices: Not Supported
00:17:04.031 PLE Aggregate Log Change Notices: Not Supported
00:17:04.031 LBA Status Info Alert Notices: Not Supported
00:17:04.031 EGE Aggregate Log Change Notices: Not Supported
00:17:04.031 Normal NVM Subsystem Shutdown event: Not Supported
00:17:04.031 Zone Descriptor Change Notices: Not Supported
00:17:04.031 Discovery Log Change Notices: Supported
00:17:04.031 Controller Attributes
00:17:04.031 128-bit Host Identifier: Not Supported
00:17:04.031 Non-Operational Permissive Mode: Not Supported
00:17:04.031 NVM Sets: Not Supported
00:17:04.031 Read Recovery Levels: Not Supported
00:17:04.031 Endurance Groups: Not Supported
00:17:04.031 Predictable Latency Mode: Not Supported
00:17:04.031 Traffic Based Keep ALive: Not Supported
00:17:04.031 Namespace Granularity: Not Supported
00:17:04.031 SQ Associations: Not Supported
00:17:04.031 UUID List: Not Supported
00:17:04.031 Multi-Domain Subsystem: Not Supported
00:17:04.031 Fixed Capacity Management: Not Supported
00:17:04.031 Variable Capacity Management: Not Supported
00:17:04.031 Delete Endurance Group: Not Supported
00:17:04.031 Delete NVM Set: Not Supported
00:17:04.031 Extended LBA Formats Supported: Not Supported
00:17:04.031 Flexible Data Placement Supported: Not Supported
00:17:04.031 
00:17:04.031 Controller Memory Buffer Support
00:17:04.031 ================================
00:17:04.031 Supported: No
00:17:04.031 
00:17:04.031 Persistent Memory Region Support
00:17:04.031 ================================
00:17:04.031 Supported: No
00:17:04.031 
00:17:04.031 Admin Command Set Attributes
00:17:04.031 ============================
00:17:04.031 Security Send/Receive: Not Supported
00:17:04.031 Format NVM: Not Supported
00:17:04.031 Firmware Activate/Download: Not Supported
00:17:04.031 Namespace Management: Not Supported
00:17:04.031 Device Self-Test: Not Supported
00:17:04.031 Directives: Not Supported
00:17:04.031 NVMe-MI: Not Supported
00:17:04.031 Virtualization Management: Not Supported
00:17:04.031 Doorbell Buffer Config: Not Supported
00:17:04.031 Get LBA Status Capability: Not Supported
00:17:04.031 Command & Feature Lockdown Capability: Not Supported
00:17:04.031 Abort Command Limit: 1
00:17:04.031 Async Event Request Limit: 4
00:17:04.031 Number of Firmware Slots: N/A
00:17:04.031 Firmware Slot 1 Read-Only: N/A
00:17:04.031 Firmware Activation Without Reset: N/A
00:17:04.031 Multiple Update Detection Support: N/A
00:17:04.031 Firmware Update Granularity: No Information Provided
00:17:04.031 Per-Namespace SMART Log: No
00:17:04.031 Asymmetric Namespace Access Log Page: Not Supported
00:17:04.031 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:04.031 Command Effects Log Page: Not Supported
00:17:04.031 Get Log Page Extended Data: Supported
00:17:04.031 Telemetry Log Pages: Not Supported
00:17:04.031 Persistent Event Log Pages: Not Supported
00:17:04.031 Supported Log Pages Log Page: May Support
00:17:04.031 Commands Supported & Effects Log Page: Not Supported
00:17:04.031 Feature Identifiers & Effects Log Page:May Support
00:17:04.031 NVMe-MI Commands & Effects Log Page: May Support
00:17:04.031 Data Area 4 for Telemetry Log: Not Supported
00:17:04.031 Error Log Page Entries Supported: 128
00:17:04.032 Keep Alive: Not Supported
00:17:04.032 
00:17:04.032 NVM Command Set Attributes
00:17:04.032 ==========================
00:17:04.032 Submission Queue Entry Size
00:17:04.032 Max: 1
00:17:04.032 Min: 1
00:17:04.032 Completion Queue Entry Size
00:17:04.032 Max: 1
00:17:04.032 Min: 1
00:17:04.032 Number of Namespaces: 0
00:17:04.032 Compare Command: Not Supported
00:17:04.032 Write Uncorrectable Command: Not Supported
00:17:04.032 Dataset Management Command: Not Supported
00:17:04.032 Write Zeroes Command: Not Supported
00:17:04.032 Set Features Save Field: Not Supported
00:17:04.032 Reservations: Not Supported
00:17:04.032 Timestamp: Not Supported
00:17:04.032 Copy: Not Supported
00:17:04.032 Volatile Write Cache: Not Present
00:17:04.032 Atomic Write Unit (Normal): 1
00:17:04.032 Atomic Write Unit (PFail): 1
00:17:04.032 Atomic Compare & Write Unit: 1
00:17:04.032 Fused Compare & Write: Supported
00:17:04.032 Scatter-Gather List
00:17:04.032 SGL Command Set: Supported
00:17:04.032 SGL Keyed: Supported
00:17:04.032 SGL Bit Bucket Descriptor: Not Supported
00:17:04.032 SGL Metadata Pointer: Not Supported
00:17:04.032 Oversized SGL: Not Supported
00:17:04.032 SGL Metadata Address: Not Supported
00:17:04.032 SGL Offset: Supported
00:17:04.032 Transport SGL Data Block: Not Supported
00:17:04.032 Replay Protected Memory Block: Not Supported
00:17:04.032 
00:17:04.032 Firmware Slot Information
00:17:04.032 =========================
00:17:04.032 Active slot: 0
00:17:04.032 
00:17:04.032 
00:17:04.032 Error Log
00:17:04.032 =========
00:17:04.032 
00:17:04.032 Active Namespaces
00:17:04.032 =================
00:17:04.032 Discovery Log Page
00:17:04.032 ==================
00:17:04.032 Generation Counter: 2
00:17:04.032 Number of Records: 2
00:17:04.032 Record Format: 0
00:17:04.032 
00:17:04.032 Discovery Log Entry 0
00:17:04.032 ----------------------
00:17:04.032 Transport Type: 3 (TCP)
00:17:04.032 Address Family: 1 (IPv4)
00:17:04.032 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:04.032 Entry Flags:
00:17:04.032 Duplicate Returned Information: 1
00:17:04.032 Explicit Persistent Connection Support for Discovery: 1
00:17:04.032 Transport Requirements:
00:17:04.032 Secure Channel: Not Required
00:17:04.032 Port ID: 0 (0x0000)
00:17:04.032 Controller ID: 65535 (0xffff)
00:17:04.032 Admin Max SQ Size: 128
00:17:04.032 Transport Service Identifier: 4420
00:17:04.032 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:04.032 Transport Address: 10.0.0.2
00:17:04.032 
Discovery Log Entry 1 00:17:04.032 ---------------------- 00:17:04.032 Transport Type: 3 (TCP) 00:17:04.032 Address Family: 1 (IPv4) 00:17:04.032 Subsystem Type: 2 (NVM Subsystem) 00:17:04.032 Entry Flags: 00:17:04.032 Duplicate Returned Information: 0 00:17:04.032 Explicit Persistent Connection Support for Discovery: 0 00:17:04.032 Transport Requirements: 00:17:04.032 Secure Channel: Not Required 00:17:04.032 Port ID: 0 (0x0000) 00:17:04.032 Controller ID: 65535 (0xffff) 00:17:04.032 Admin Max SQ Size: 128 00:17:04.032 Transport Service Identifier: 4420 00:17:04.032 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:04.032 Transport Address: 10.0.0.2 [2024-07-15 13:01:16.287194] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:04.032 [2024-07-15 13:01:16.287220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28840) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.032 [2024-07-15 13:01:16.287247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd289c0) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.032 [2024-07-15 13:01:16.287267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28b40) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.032 [2024-07-15 13:01:16.287291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.032 [2024-07-15 13:01:16.287323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.032 [2024-07-15 13:01:16.287365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.032 [2024-07-15 13:01:16.287398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.032 [2024-07-15 13:01:16.287517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.032 [2024-07-15 13:01:16.287524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.032 [2024-07-15 13:01:16.287528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.032 [2024-07-15 13:01:16.287559] 
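The two discovery log entries reported above describe the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1, both reachable over TCP at 10.0.0.2:4420; the GET LOG PAGE (02) admin commands earlier in the trace target log identifier 0x70, the discovery log, and appear to be the reads that produced this report. Below is a minimal sketch of the same read using SPDK's public host API. It assumes ctrlr is an admin-queue handle already connected to the discovery subsystem (for example via spdk_nvme_connect()); the helper and context names are ours, not part of the test.

#include <errno.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

struct disc_log_ctx {
	struct spdk_nvmf_discovery_log_page *log;
	bool done;
};

static void
discovery_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct disc_log_ctx *ctx = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* Corresponds to "Generation Counter: 2" / "Number of Records: 2" above. */
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       ctx->log->genctr, ctx->log->numrec);
	}
	ctx->done = true;
}

/* Illustrative helper (the name is ours): read the 1 KiB discovery log header. */
static int
fetch_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr)
{
	struct disc_log_ctx ctx = { .log = NULL, .done = false };
	int rc;

	ctx.log = spdk_dma_zmalloc(sizeof(*ctx.log), 0, NULL);
	if (ctx.log == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      SPDK_NVME_GLOBAL_NS_TAG, ctx.log,
					      sizeof(*ctx.log), 0,
					      discovery_log_done, &ctx);
	if (rc == 0) {
		/* Busy-poll the admin queue until the completion callback fires. */
		while (!ctx.done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
	}

	spdk_dma_free(ctx.log);
	return rc;
}

The records themselves start at offset 1024 of the log and are fetched with further reads at increasing offsets, which is what the subsequent larger and then 8-byte GET LOG PAGE transfers in the trace above appear to be.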
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.032 [2024-07-15 13:01:16.287584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.032 [2024-07-15 13:01:16.287699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.032 [2024-07-15 13:01:16.287706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.032 [2024-07-15 13:01:16.287710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287719] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:04.032 [2024-07-15 13:01:16.287725] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:04.032 [2024-07-15 13:01:16.287735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287740] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.032 [2024-07-15 13:01:16.287752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.032 [2024-07-15 13:01:16.287797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.032 [2024-07-15 13:01:16.287881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.032 [2024-07-15 13:01:16.287888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.032 [2024-07-15 13:01:16.287892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.032 [2024-07-15 13:01:16.287909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.032 [2024-07-15 13:01:16.287917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.032 [2024-07-15 13:01:16.287925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.032 [2024-07-15 13:01:16.287945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288055] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.288872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.288889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.288909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.288983] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.288990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.288994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.288998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.289009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.289014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.289017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.289025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.289043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.289114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.289121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.289125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.289129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.033 [2024-07-15 13:01:16.289140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.289145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.033 [2024-07-15 13:01:16.289149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.033 [2024-07-15 13:01:16.289156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.033 [2024-07-15 13:01:16.289174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.033 [2024-07-15 13:01:16.289248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.033 [2024-07-15 13:01:16.289260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.033 [2024-07-15 13:01:16.289265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 [2024-07-15 13:01:16.289280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.034 [2024-07-15 13:01:16.289297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.034 [2024-07-15 13:01:16.289316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.034 [2024-07-15 13:01:16.289389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.034 [2024-07-15 13:01:16.289404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.034 [2024-07-15 13:01:16.289409] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 [2024-07-15 13:01:16.289425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.034 [2024-07-15 13:01:16.289441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.034 [2024-07-15 13:01:16.289461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.034 [2024-07-15 13:01:16.289544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.034 [2024-07-15 13:01:16.289558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.034 [2024-07-15 13:01:16.289562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 [2024-07-15 13:01:16.289578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.034 [2024-07-15 13:01:16.289595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.034 [2024-07-15 13:01:16.289614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.034 [2024-07-15 13:01:16.289683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.034 [2024-07-15 13:01:16.289694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.034 [2024-07-15 13:01:16.289698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 [2024-07-15 13:01:16.289714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.289723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.034 [2024-07-15 13:01:16.289730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.034 [2024-07-15 13:01:16.289749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.034 [2024-07-15 13:01:16.293792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.034 [2024-07-15 13:01:16.293819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.034 [2024-07-15 13:01:16.293825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.293830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 
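The "Prepare to destruct SSD" message above, the ABORTED - SQ DELETION completions for the queued admin commands, and the long run of FABRIC PROPERTY GET polling that follows (RTD3E = 0 us, shutdown timeout = 10000 ms) are the library shutting the discovery controller down: the shutdown notification is written through a property set and the controller status is then read back until shutdown is reported complete. From the application side this is a single call; a minimal sketch, with the helper name being ours:

#include "spdk/nvme.h"

/* Illustrative helper: release a controller handle obtained from spdk_nvme_connect().
 * spdk_nvme_detach() drives the destruct path traced above, including the shutdown
 * notification and the status polling until shutdown completes or times out. */
static void
release_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	if (ctrlr != NULL) {
		spdk_nvme_detach(ctrlr);
	}
}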
[2024-07-15 13:01:16.293848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.293854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.293858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce5a60) 00:17:04.034 [2024-07-15 13:01:16.293868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.034 [2024-07-15 13:01:16.293898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd28cc0, cid 3, qid 0 00:17:04.034 [2024-07-15 13:01:16.294012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.034 [2024-07-15 13:01:16.294019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.034 [2024-07-15 13:01:16.294023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.294027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd28cc0) on tqpair=0xce5a60 00:17:04.034 [2024-07-15 13:01:16.294036] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:17:04.034 00:17:04.034 13:01:16 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:04.034 [2024-07-15 13:01:16.336321] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:04.034 [2024-07-15 13:01:16.336401] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86813 ] 00:17:04.034 [2024-07-15 13:01:16.481302] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:04.034 [2024-07-15 13:01:16.481384] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:04.034 [2024-07-15 13:01:16.481393] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:04.034 [2024-07-15 13:01:16.481407] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:04.034 [2024-07-15 13:01:16.481416] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:04.034 [2024-07-15 13:01:16.481600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:04.034 [2024-07-15 13:01:16.481653] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6d6a60 0 00:17:04.034 [2024-07-15 13:01:16.493790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:04.034 [2024-07-15 13:01:16.493830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:04.034 [2024-07-15 13:01:16.493847] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:04.034 [2024-07-15 13:01:16.493851] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:04.034 [2024-07-15 13:01:16.493905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.493913] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.034 [2024-07-15 13:01:16.493918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.034 [2024-07-15 13:01:16.493935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:04.034 [2024-07-15 13:01:16.493976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.294 [2024-07-15 13:01:16.501786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.294 [2024-07-15 13:01:16.501816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.294 [2024-07-15 13:01:16.501822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.501828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.294 [2024-07-15 13:01:16.501845] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:04.294 [2024-07-15 13:01:16.501857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:04.294 [2024-07-15 13:01:16.501865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:04.294 [2024-07-15 13:01:16.501888] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.501894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.501899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.294 [2024-07-15 13:01:16.501911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.294 [2024-07-15 13:01:16.501946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.294 [2024-07-15 13:01:16.502076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.294 [2024-07-15 13:01:16.502093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.294 [2024-07-15 13:01:16.502098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.294 [2024-07-15 13:01:16.502110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:04.294 [2024-07-15 13:01:16.502119] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:04.294 [2024-07-15 13:01:16.502128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.294 [2024-07-15 13:01:16.502146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.294 [2024-07-15 13:01:16.502168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.294 [2024-07-15 13:01:16.502255] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.294 [2024-07-15 13:01:16.502270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.294 [2024-07-15 13:01:16.502275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.294 [2024-07-15 13:01:16.502287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:04.294 [2024-07-15 13:01:16.502297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.294 [2024-07-15 13:01:16.502306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.294 [2024-07-15 13:01:16.502315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.294 [2024-07-15 13:01:16.502324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.294 [2024-07-15 13:01:16.502345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.294 [2024-07-15 13:01:16.502430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.294 [2024-07-15 13:01:16.502437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.294 [2024-07-15 13:01:16.502442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.502453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.295 [2024-07-15 13:01:16.502464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.502482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.502502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.502584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.502592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.502596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.502606] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.295 [2024-07-15 13:01:16.502612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.295 [2024-07-15 13:01:16.502622] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.295 [2024-07-15 13:01:16.502728] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:04.295 [2024-07-15 13:01:16.502741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.295 [2024-07-15 13:01:16.502752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.502783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.502807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.502896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.502908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.502913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.502924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.295 [2024-07-15 13:01:16.502936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.502946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.502954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.502976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.503061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.503073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.503078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.503100] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.295 [2024-07-15 13:01:16.503107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503117] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:04.295 [2024-07-15 13:01:16.503130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.503183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.503333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.295 [2024-07-15 13:01:16.503353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.295 [2024-07-15 13:01:16.503359] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503364] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=4096, cccid=0 00:17:04.295 [2024-07-15 13:01:16.503369] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719840) on tqpair(0x6d6a60): expected_datao=0, payload_size=4096 00:17:04.295 [2024-07-15 13:01:16.503375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503385] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503391] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.503408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.503412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.503428] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:04.295 [2024-07-15 13:01:16.503434] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:04.295 [2024-07-15 13:01:16.503439] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:04.295 [2024-07-15 13:01:16.503444] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:04.295 [2024-07-15 13:01:16.503450] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:04.295 [2024-07-15 13:01:16.503455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 
[2024-07-15 13:01:16.503493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.295 [2024-07-15 13:01:16.503515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.503611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.503627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.503635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.503649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.295 [2024-07-15 13:01:16.503682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.295 [2024-07-15 13:01:16.503716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.295 [2024-07-15 13:01:16.503739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.295 [2024-07-15 13:01:16.503761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.503815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.503823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.503835] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.503869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719840, cid 0, qid 0 00:17:04.295 [2024-07-15 13:01:16.503879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7199c0, cid 1, qid 0 00:17:04.295 [2024-07-15 13:01:16.503884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719b40, cid 2, qid 0 00:17:04.295 [2024-07-15 13:01:16.503890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.295 [2024-07-15 13:01:16.503895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.295 [2024-07-15 13:01:16.504043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.504059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.504064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.504076] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:04.295 [2024-07-15 13:01:16.504088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.504099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.504106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.504114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.504133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.295 [2024-07-15 13:01:16.504163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.295 [2024-07-15 13:01:16.504247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.295 [2024-07-15 13:01:16.504268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.295 [2024-07-15 13:01:16.504277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.295 [2024-07-15 13:01:16.504362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.504379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.295 [2024-07-15 13:01:16.504391] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.295 [2024-07-15 13:01:16.504412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.295 [2024-07-15 13:01:16.504443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.295 [2024-07-15 13:01:16.504552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.295 [2024-07-15 13:01:16.504569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.295 [2024-07-15 13:01:16.504575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504582] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=4096, cccid=4 00:17:04.295 [2024-07-15 13:01:16.504591] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719e40) on tqpair(0x6d6a60): expected_datao=0, payload_size=4096 00:17:04.295 [2024-07-15 13:01:16.504599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504612] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.295 [2024-07-15 13:01:16.504619] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.504637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.504641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.504665] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:04.296 [2024-07-15 13:01:16.504677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.504690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.504699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.504713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.504738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.296 [2024-07-15 13:01:16.504882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.296 [2024-07-15 13:01:16.504898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.296 [2024-07-15 13:01:16.504904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504908] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=4096, cccid=4 00:17:04.296 [2024-07-15 13:01:16.504914] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719e40) on tqpair(0x6d6a60): expected_datao=0, payload_size=4096 00:17:04.296 [2024-07-15 13:01:16.504919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504930] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504937] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.504963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.504970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.504977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.505003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.505050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.505077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.296 [2024-07-15 13:01:16.505184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.296 [2024-07-15 13:01:16.505202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.296 [2024-07-15 13:01:16.505207] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505212] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=4096, cccid=4 00:17:04.296 [2024-07-15 13:01:16.505217] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719e40) on tqpair(0x6d6a60): expected_datao=0, payload_size=4096 00:17:04.296 [2024-07-15 13:01:16.505223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505231] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505235] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.505252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.505256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.505274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505290] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505340] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.296 [2024-07-15 13:01:16.505348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:04.296 [2024-07-15 13:01:16.505358] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:04.296 [2024-07-15 13:01:16.505384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.505400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.505409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.505425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.296 [2024-07-15 13:01:16.505463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.296 [2024-07-15 13:01:16.505475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719fc0, cid 5, qid 0 00:17:04.296 [2024-07-15 13:01:16.505576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.505591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.505596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.505609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.505616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.505622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719fc0) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.505648] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.505657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.505668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.505699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719fc0, cid 5, qid 0 00:17:04.296 [2024-07-15 13:01:16.509789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.509811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.509817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.509822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719fc0) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.509837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.509843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.509852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.509882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719fc0, cid 5, qid 0 00:17:04.296 [2024-07-15 13:01:16.509983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.509994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.510002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719fc0) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.510027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.510046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.510070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719fc0, cid 5, qid 0 00:17:04.296 [2024-07-15 13:01:16.510157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.296 [2024-07-15 13:01:16.510168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.296 [2024-07-15 13:01:16.510173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719fc0) on tqpair=0x6d6a60 00:17:04.296 [2024-07-15 13:01:16.510202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.510223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.510233] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.510245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.510254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.510265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.510280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6d6a60) 00:17:04.296 [2024-07-15 13:01:16.510301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.296 [2024-07-15 13:01:16.510336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719fc0, cid 5, qid 0 00:17:04.296 [2024-07-15 13:01:16.510345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719e40, cid 4, qid 0 00:17:04.296 [2024-07-15 13:01:16.510351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x71a140, cid 6, qid 0 00:17:04.296 [2024-07-15 13:01:16.510356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x71a2c0, cid 7, qid 0 00:17:04.296 [2024-07-15 13:01:16.510569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.296 [2024-07-15 13:01:16.510588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.296 [2024-07-15 13:01:16.510593] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510598] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=8192, cccid=5 00:17:04.296 [2024-07-15 13:01:16.510604] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719fc0) on tqpair(0x6d6a60): expected_datao=0, payload_size=8192 00:17:04.296 [2024-07-15 13:01:16.510609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510629] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510634] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.296 [2024-07-15 13:01:16.510648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.296 [2024-07-15 13:01:16.510652] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.296 [2024-07-15 13:01:16.510656] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=512, cccid=4 00:17:04.296 [2024-07-15 13:01:16.510662] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x719e40) on tqpair(0x6d6a60): expected_datao=0, payload_size=512 00:17:04.297 [2024-07-15 13:01:16.510667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510674] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.297 [2024-07-15 13:01:16.510697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.297 [2024-07-15 13:01:16.510704] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510711] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=512, cccid=6 00:17:04.297 [2024-07-15 13:01:16.510719] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x71a140) on tqpair(0x6d6a60): expected_datao=0, payload_size=512 00:17:04.297 [2024-07-15 13:01:16.510727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510738] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510744] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:04.297 [2024-07-15 13:01:16.510757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:04.297 [2024-07-15 13:01:16.510761] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510780] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6d6a60): datao=0, datal=4096, cccid=7 00:17:04.297 [2024-07-15 13:01:16.510787] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x71a2c0) on tqpair(0x6d6a60): expected_datao=0, payload_size=4096 00:17:04.297 [2024-07-15 13:01:16.510794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510806] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510814] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.297 [2024-07-15 13:01:16.510840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.297 [2024-07-15 13:01:16.510848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719fc0) on tqpair=0x6d6a60 00:17:04.297 [2024-07-15 13:01:16.510877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.297 [2024-07-15 13:01:16.510886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.297 [2024-07-15 13:01:16.510890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719e40) on tqpair=0x6d6a60 00:17:04.297 [2024-07-15 13:01:16.510907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.297 [2024-07-15 13:01:16.510915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.297 [2024-07-15 13:01:16.510919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x71a140) on tqpair=0x6d6a60 00:17:04.297 [2024-07-15 
13:01:16.510933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.297 [2024-07-15 13:01:16.510943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.297 [2024-07-15 13:01:16.510950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.297 [2024-07-15 13:01:16.510957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x71a2c0) on tqpair=0x6d6a60 00:17:04.297 ===================================================== 00:17:04.297 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.297 ===================================================== 00:17:04.297 Controller Capabilities/Features 00:17:04.297 ================================ 00:17:04.297 Vendor ID: 8086 00:17:04.297 Subsystem Vendor ID: 8086 00:17:04.297 Serial Number: SPDK00000000000001 00:17:04.297 Model Number: SPDK bdev Controller 00:17:04.297 Firmware Version: 24.09 00:17:04.297 Recommended Arb Burst: 6 00:17:04.297 IEEE OUI Identifier: e4 d2 5c 00:17:04.297 Multi-path I/O 00:17:04.297 May have multiple subsystem ports: Yes 00:17:04.297 May have multiple controllers: Yes 00:17:04.297 Associated with SR-IOV VF: No 00:17:04.297 Max Data Transfer Size: 131072 00:17:04.297 Max Number of Namespaces: 32 00:17:04.297 Max Number of I/O Queues: 127 00:17:04.297 NVMe Specification Version (VS): 1.3 00:17:04.297 NVMe Specification Version (Identify): 1.3 00:17:04.297 Maximum Queue Entries: 128 00:17:04.297 Contiguous Queues Required: Yes 00:17:04.297 Arbitration Mechanisms Supported 00:17:04.297 Weighted Round Robin: Not Supported 00:17:04.297 Vendor Specific: Not Supported 00:17:04.297 Reset Timeout: 15000 ms 00:17:04.297 Doorbell Stride: 4 bytes 00:17:04.297 NVM Subsystem Reset: Not Supported 00:17:04.297 Command Sets Supported 00:17:04.297 NVM Command Set: Supported 00:17:04.297 Boot Partition: Not Supported 00:17:04.297 Memory Page Size Minimum: 4096 bytes 00:17:04.297 Memory Page Size Maximum: 4096 bytes 00:17:04.297 Persistent Memory Region: Not Supported 00:17:04.297 Optional Asynchronous Events Supported 00:17:04.297 Namespace Attribute Notices: Supported 00:17:04.297 Firmware Activation Notices: Not Supported 00:17:04.297 ANA Change Notices: Not Supported 00:17:04.297 PLE Aggregate Log Change Notices: Not Supported 00:17:04.297 LBA Status Info Alert Notices: Not Supported 00:17:04.297 EGE Aggregate Log Change Notices: Not Supported 00:17:04.297 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.297 Zone Descriptor Change Notices: Not Supported 00:17:04.297 Discovery Log Change Notices: Not Supported 00:17:04.297 Controller Attributes 00:17:04.297 128-bit Host Identifier: Supported 00:17:04.297 Non-Operational Permissive Mode: Not Supported 00:17:04.297 NVM Sets: Not Supported 00:17:04.297 Read Recovery Levels: Not Supported 00:17:04.297 Endurance Groups: Not Supported 00:17:04.297 Predictable Latency Mode: Not Supported 00:17:04.297 Traffic Based Keep ALive: Not Supported 00:17:04.297 Namespace Granularity: Not Supported 00:17:04.297 SQ Associations: Not Supported 00:17:04.297 UUID List: Not Supported 00:17:04.297 Multi-Domain Subsystem: Not Supported 00:17:04.297 Fixed Capacity Management: Not Supported 00:17:04.297 Variable Capacity Management: Not Supported 00:17:04.297 Delete Endurance Group: Not Supported 00:17:04.297 Delete NVM Set: Not Supported 00:17:04.297 Extended LBA Formats Supported: Not Supported 00:17:04.297 Flexible Data Placement Supported: Not Supported 00:17:04.297 00:17:04.297 Controller Memory 
Buffer Support 00:17:04.297 ================================ 00:17:04.297 Supported: No 00:17:04.297 00:17:04.297 Persistent Memory Region Support 00:17:04.297 ================================ 00:17:04.297 Supported: No 00:17:04.297 00:17:04.297 Admin Command Set Attributes 00:17:04.297 ============================ 00:17:04.297 Security Send/Receive: Not Supported 00:17:04.297 Format NVM: Not Supported 00:17:04.297 Firmware Activate/Download: Not Supported 00:17:04.297 Namespace Management: Not Supported 00:17:04.297 Device Self-Test: Not Supported 00:17:04.297 Directives: Not Supported 00:17:04.297 NVMe-MI: Not Supported 00:17:04.297 Virtualization Management: Not Supported 00:17:04.297 Doorbell Buffer Config: Not Supported 00:17:04.297 Get LBA Status Capability: Not Supported 00:17:04.297 Command & Feature Lockdown Capability: Not Supported 00:17:04.297 Abort Command Limit: 4 00:17:04.297 Async Event Request Limit: 4 00:17:04.297 Number of Firmware Slots: N/A 00:17:04.297 Firmware Slot 1 Read-Only: N/A 00:17:04.297 Firmware Activation Without Reset: N/A 00:17:04.297 Multiple Update Detection Support: N/A 00:17:04.297 Firmware Update Granularity: No Information Provided 00:17:04.297 Per-Namespace SMART Log: No 00:17:04.297 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.297 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:04.297 Command Effects Log Page: Supported 00:17:04.297 Get Log Page Extended Data: Supported 00:17:04.297 Telemetry Log Pages: Not Supported 00:17:04.297 Persistent Event Log Pages: Not Supported 00:17:04.297 Supported Log Pages Log Page: May Support 00:17:04.297 Commands Supported & Effects Log Page: Not Supported 00:17:04.297 Feature Identifiers & Effects Log Page:May Support 00:17:04.297 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.297 Data Area 4 for Telemetry Log: Not Supported 00:17:04.298 Error Log Page Entries Supported: 128 00:17:04.298 Keep Alive: Supported 00:17:04.298 Keep Alive Granularity: 10000 ms 00:17:04.298 00:17:04.298 NVM Command Set Attributes 00:17:04.298 ========================== 00:17:04.298 Submission Queue Entry Size 00:17:04.298 Max: 64 00:17:04.298 Min: 64 00:17:04.298 Completion Queue Entry Size 00:17:04.298 Max: 16 00:17:04.298 Min: 16 00:17:04.298 Number of Namespaces: 32 00:17:04.298 Compare Command: Supported 00:17:04.298 Write Uncorrectable Command: Not Supported 00:17:04.298 Dataset Management Command: Supported 00:17:04.298 Write Zeroes Command: Supported 00:17:04.298 Set Features Save Field: Not Supported 00:17:04.298 Reservations: Supported 00:17:04.298 Timestamp: Not Supported 00:17:04.298 Copy: Supported 00:17:04.298 Volatile Write Cache: Present 00:17:04.298 Atomic Write Unit (Normal): 1 00:17:04.298 Atomic Write Unit (PFail): 1 00:17:04.298 Atomic Compare & Write Unit: 1 00:17:04.298 Fused Compare & Write: Supported 00:17:04.298 Scatter-Gather List 00:17:04.298 SGL Command Set: Supported 00:17:04.298 SGL Keyed: Supported 00:17:04.298 SGL Bit Bucket Descriptor: Not Supported 00:17:04.298 SGL Metadata Pointer: Not Supported 00:17:04.298 Oversized SGL: Not Supported 00:17:04.298 SGL Metadata Address: Not Supported 00:17:04.298 SGL Offset: Supported 00:17:04.298 Transport SGL Data Block: Not Supported 00:17:04.298 Replay Protected Memory Block: Not Supported 00:17:04.298 00:17:04.298 Firmware Slot Information 00:17:04.298 ========================= 00:17:04.298 Active slot: 1 00:17:04.298 Slot 1 Firmware Revision: 24.09 00:17:04.298 00:17:04.298 00:17:04.298 Commands Supported and Effects 00:17:04.298 
============================== 00:17:04.298 Admin Commands 00:17:04.298 -------------- 00:17:04.298 Get Log Page (02h): Supported 00:17:04.298 Identify (06h): Supported 00:17:04.298 Abort (08h): Supported 00:17:04.298 Set Features (09h): Supported 00:17:04.298 Get Features (0Ah): Supported 00:17:04.298 Asynchronous Event Request (0Ch): Supported 00:17:04.298 Keep Alive (18h): Supported 00:17:04.298 I/O Commands 00:17:04.298 ------------ 00:17:04.298 Flush (00h): Supported LBA-Change 00:17:04.298 Write (01h): Supported LBA-Change 00:17:04.298 Read (02h): Supported 00:17:04.298 Compare (05h): Supported 00:17:04.298 Write Zeroes (08h): Supported LBA-Change 00:17:04.298 Dataset Management (09h): Supported LBA-Change 00:17:04.298 Copy (19h): Supported LBA-Change 00:17:04.298 00:17:04.298 Error Log 00:17:04.298 ========= 00:17:04.298 00:17:04.298 Arbitration 00:17:04.298 =========== 00:17:04.298 Arbitration Burst: 1 00:17:04.298 00:17:04.298 Power Management 00:17:04.298 ================ 00:17:04.298 Number of Power States: 1 00:17:04.298 Current Power State: Power State #0 00:17:04.298 Power State #0: 00:17:04.298 Max Power: 0.00 W 00:17:04.298 Non-Operational State: Operational 00:17:04.298 Entry Latency: Not Reported 00:17:04.298 Exit Latency: Not Reported 00:17:04.298 Relative Read Throughput: 0 00:17:04.298 Relative Read Latency: 0 00:17:04.298 Relative Write Throughput: 0 00:17:04.298 Relative Write Latency: 0 00:17:04.298 Idle Power: Not Reported 00:17:04.298 Active Power: Not Reported 00:17:04.298 Non-Operational Permissive Mode: Not Supported 00:17:04.298 00:17:04.298 Health Information 00:17:04.298 ================== 00:17:04.298 Critical Warnings: 00:17:04.298 Available Spare Space: OK 00:17:04.298 Temperature: OK 00:17:04.298 Device Reliability: OK 00:17:04.298 Read Only: No 00:17:04.298 Volatile Memory Backup: OK 00:17:04.298 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.298 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:04.298 Available Spare: 0% 00:17:04.298 Available Spare Threshold: 0% 00:17:04.298 Life Percentage Used:[2024-07-15 13:01:16.511083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6d6a60) 00:17:04.298 [2024-07-15 13:01:16.511120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.298 [2024-07-15 13:01:16.511156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x71a2c0, cid 7, qid 0 00:17:04.298 [2024-07-15 13:01:16.511265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.298 [2024-07-15 13:01:16.511273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.298 [2024-07-15 13:01:16.511278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x71a2c0) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511326] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:04.298 [2024-07-15 13:01:16.511345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719840) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
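The controller report printed above (its Health Information figures and the Active Namespaces section surface again after the shutdown trace below, because the identify tool's stdout is interleaved with the debug log) comes from SPDK's identify example, which host/identify.sh runs against the target. A minimal sketch of reproducing that dump by hand, assuming a built SPDK tree with hugepages already reserved via scripts/setup.sh and the same listener still serving nqn.2016-06.io.spdk:cnode1:
  # transport ID fields taken from the log above; the binary path assumes a default build layout
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'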
00:17:04.298 [2024-07-15 13:01:16.511360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7199c0) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.298 [2024-07-15 13:01:16.511372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719b40) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.298 [2024-07-15 13:01:16.511384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.298 [2024-07-15 13:01:16.511400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.298 [2024-07-15 13:01:16.511419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.298 [2024-07-15 13:01:16.511444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.298 [2024-07-15 13:01:16.511531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.298 [2024-07-15 13:01:16.511539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.298 [2024-07-15 13:01:16.511543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.298 [2024-07-15 13:01:16.511556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.298 [2024-07-15 13:01:16.511565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.298 [2024-07-15 13:01:16.511574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.298 [2024-07-15 13:01:16.511597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.298 [2024-07-15 13:01:16.511715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.298 [2024-07-15 13:01:16.511731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.298 [2024-07-15 13:01:16.511736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.511741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.511747] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:04.299 [2024-07-15 13:01:16.511753] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:04.299 [2024-07-15 13:01:16.511777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
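The four ABORTED - SQ DELETION completions above are the host failing the admin requests still queued on cid 0-3 once nvme_ctrlr_destruct_async starts tearing the controller down, and the FABRIC PROPERTY GET/SET pair on cid 3 is the shutdown being driven through the Fabrics property commands (read CC, then set CC.SHN); with RTD3E = 0 the driver falls back to its default 10000 ms shutdown timeout. Only as an analogy (this test uses the SPDK userspace initiator, not the kernel driver), the same graceful shutdown can be requested from a kernel host with nvme-cli:
  # disconnect a kernel-initiator connection to the same subsystem
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1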
00:17:04.299 [2024-07-15 13:01:16.511784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.511789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.511797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.511820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.511907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.511914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.511918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.511923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.511935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.511941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.511945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.511953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.511973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.512119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.512276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.512420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.512565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 
[2024-07-15 13:01:16.512723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.512859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.512869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.512877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.512898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.512985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.512993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.512997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.513013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.513032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.513052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.513131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.299 [2024-07-15 13:01:16.513146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.513151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.299 [2024-07-15 13:01:16.513168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.299 [2024-07-15 13:01:16.513186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.299 [2024-07-15 13:01:16.513206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.299 [2024-07-15 13:01:16.513290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:17:04.299 [2024-07-15 13:01:16.513298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.299 [2024-07-15 13:01:16.513302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.299 [2024-07-15 13:01:16.513307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.300 [2024-07-15 13:01:16.513318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.300 [2024-07-15 13:01:16.513335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.300 [2024-07-15 13:01:16.513355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.300 [2024-07-15 13:01:16.513433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.300 [2024-07-15 13:01:16.513441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.300 [2024-07-15 13:01:16.513445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.300 [2024-07-15 13:01:16.513461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.300 [2024-07-15 13:01:16.513478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.300 [2024-07-15 13:01:16.513497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.300 [2024-07-15 13:01:16.513580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.300 [2024-07-15 13:01:16.513592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.300 [2024-07-15 13:01:16.513597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.300 [2024-07-15 13:01:16.513613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.513623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.300 [2024-07-15 13:01:16.513631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.300 [2024-07-15 13:01:16.513651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.300 [2024-07-15 13:01:16.513732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.300 [2024-07-15 13:01:16.513739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.300 [2024-07-15 13:01:16.513743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
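Each repeated FABRIC PROPERTY GET above is one pass of the shutdown poll: the host keeps re-reading CSTS over the admin queue until the shutdown-status field reports completion, noted just below as "shutdown complete in 6 milliseconds". While a host is connected or mid-disconnect, the target side can be inspected over JSON-RPC; a sketch, assuming the rpc.py path used elsewhere in this run and a still-running nvmf_tgt:
  # list configured subsystems and the qpairs currently attached to cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1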
00:17:04.300 [2024-07-15 13:01:16.513748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.300 [2024-07-15 13:01:16.513759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.517785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.517794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6d6a60) 00:17:04.300 [2024-07-15 13:01:16.517805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.300 [2024-07-15 13:01:16.517836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x719cc0, cid 3, qid 0 00:17:04.300 [2024-07-15 13:01:16.517948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:04.300 [2024-07-15 13:01:16.517957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:04.300 [2024-07-15 13:01:16.517961] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:04.300 [2024-07-15 13:01:16.517966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x719cc0) on tqpair=0x6d6a60 00:17:04.300 [2024-07-15 13:01:16.517976] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:04.300 0% 00:17:04.300 Data Units Read: 0 00:17:04.300 Data Units Written: 0 00:17:04.300 Host Read Commands: 0 00:17:04.300 Host Write Commands: 0 00:17:04.300 Controller Busy Time: 0 minutes 00:17:04.300 Power Cycles: 0 00:17:04.300 Power On Hours: 0 hours 00:17:04.300 Unsafe Shutdowns: 0 00:17:04.300 Unrecoverable Media Errors: 0 00:17:04.300 Lifetime Error Log Entries: 0 00:17:04.300 Warning Temperature Time: 0 minutes 00:17:04.300 Critical Temperature Time: 0 minutes 00:17:04.300 00:17:04.300 Number of Queues 00:17:04.300 ================ 00:17:04.300 Number of I/O Submission Queues: 127 00:17:04.300 Number of I/O Completion Queues: 127 00:17:04.300 00:17:04.300 Active Namespaces 00:17:04.300 ================= 00:17:04.300 Namespace ID:1 00:17:04.300 Error Recovery Timeout: Unlimited 00:17:04.300 Command Set Identifier: NVM (00h) 00:17:04.300 Deallocate: Supported 00:17:04.300 Deallocated/Unwritten Error: Not Supported 00:17:04.300 Deallocated Read Value: Unknown 00:17:04.300 Deallocate in Write Zeroes: Not Supported 00:17:04.300 Deallocated Guard Field: 0xFFFF 00:17:04.300 Flush: Supported 00:17:04.300 Reservation: Supported 00:17:04.300 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.300 Size (in LBAs): 131072 (0GiB) 00:17:04.300 Capacity (in LBAs): 131072 (0GiB) 00:17:04.300 Utilization (in LBAs): 131072 (0GiB) 00:17:04.300 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:04.300 EUI64: ABCDEF0123456789 00:17:04.300 UUID: 36d60d29-bdc8-41d7-9bf9-f54fd1c249ad 00:17:04.300 Thin Provisioning: Not Supported 00:17:04.300 Per-NS Atomic Units: Yes 00:17:04.300 Atomic Boundary Size (Normal): 0 00:17:04.300 Atomic Boundary Size (PFail): 0 00:17:04.300 Atomic Boundary Offset: 0 00:17:04.300 Maximum Single Source Range Length: 65535 00:17:04.300 Maximum Copy Length: 65535 00:17:04.300 Maximum Source Range Count: 1 00:17:04.300 NGUID/EUI64 Never Reused: No 00:17:04.300 Namespace Write Protected: No 00:17:04.300 Number of LBA Formats: 1 00:17:04.300 Current LBA Format: LBA Format #00 00:17:04.300 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.300 00:17:04.300 13:01:16 
nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.300 rmmod nvme_tcp 00:17:04.300 rmmod nvme_fabrics 00:17:04.300 rmmod nvme_keyring 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@493 -- # '[' -n 86751 ']' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@494 -- # killprocess 86751 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86751 ']' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86751 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86751 00:17:04.300 killing process with pid 86751 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86751' 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86751 00:17:04.300 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86751 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:17:04.558 00:17:04.558 real 0m2.633s 00:17:04.558 user 0m7.751s 00:17:04.558 sys 0m0.581s 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.558 ************************************ 00:17:04.558 END TEST nvmf_identify 00:17:04.558 ************************************ 00:17:04.558 13:01:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.558 13:01:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.558 13:01:16 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:04.558 13:01:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.558 13:01:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.558 13:01:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.558 ************************************ 00:17:04.558 START TEST nvmf_perf 00:17:04.558 ************************************ 00:17:04.558 13:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:04.558 * Looking for test storage... 00:17:04.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
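The common.sh block above pins the fabric test defaults: listener ports 4420-4422, a throwaway host NQN generated with nvme gen-hostnqn (NVME_HOSTID is its trailing UUID), NVME_CONNECT='nvme connect', and the test subsystem NQN nqn.2016-06.io.spdk:testnqn. Used outside the harness, those pieces combine roughly as follows (a sketch; the 10.0.0.2 listener address matches the veth layout that nvmf_veth_init builds a little further below):
  HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random>
  HOSTID=${HOSTNQN##*:}           # bare UUID, the same value common.sh records as NVME_HOSTID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn --hostnqn "$HOSTNQN" --hostid "$HOSTID"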
00:17:04.815 13:01:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.816 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.816 
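The "line 33: [: : integer expression expected" message above is an ordinary bash complaint rather than a test failure: the xtrace right before it shows common.sh evaluating '[' '' -eq 1 ']', i.e. an unset variable reaching an integer comparison. The usual guard is a default expansion; a minimal standalone illustration (the variable name here is made up, only the failure mode matches):
  flag=""                                            # stands in for the unset value common.sh tested
  if [ "$flag" -eq 1 ]; then echo enabled; fi        # errors with: [: : integer expression expected
  if [ "${flag:-0}" -eq 1 ]; then echo enabled; fi   # safe: an empty string falls back to 0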
13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@436 -- # nvmf_veth_init 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:17:04.816 Cannot find device "nvmf_tgt_br" 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.816 Cannot find device "nvmf_tgt_br2" 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:17:04.816 Cannot find device "nvmf_tgt_br" 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:17:04.816 Cannot find device "nvmf_tgt_br2" 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.816 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.074 13:01:17 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.074 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:17:05.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:17:05.074 00:17:05.074 --- 10.0.0.2 ping statistics --- 00:17:05.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.075 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:17:05.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:05.075 00:17:05.075 --- 10.0.0.3 ping statistics --- 00:17:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.075 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:17:05.075 00:17:05.075 --- 10.0.0.1 ping statistics --- 00:17:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.075 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@437 -- # return 0 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@485 -- # nvmfpid=86980 00:17:05.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
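The nvmf_veth_init block above (nvmf/common.sh@145 through @211) builds a small three-link topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target owns nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the host-side peers of all three veth pairs are enslaved to the nvmf_br bridge; iptables then admits TCP/4420 plus bridge-internal forwarding, and the three pings confirm reachability in both directions. The same steps collapsed into a plain shell sketch (names and addresses are exactly those in the log; the "Cannot find/open" cleanup of a previous run is omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target address
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root namespace -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator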
00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@486 -- # waitforlisten 86980 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86980 ']' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.075 13:01:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:05.333 [2024-07-15 13:01:17.546124] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:05.333 [2024-07-15 13:01:17.546277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.333 [2024-07-15 13:01:17.688365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.333 [2024-07-15 13:01:17.776083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.333 [2024-07-15 13:01:17.776394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.333 [2024-07-15 13:01:17.776573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.333 [2024-07-15 13:01:17.776834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.333 [2024-07-15 13:01:17.776969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
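nvmfappstart then launches nvmf_tgt inside the target namespace (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xF for four reactor cores) and blocks in waitforlisten until /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern; the rpc_get_methods probe and the 0.5 s poll interval are assumptions for illustration, and the real waitforlisten helper in autotest_common.sh may poll differently:

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                                                  # max_retries=100, as in the trace
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done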
00:17:05.333 [2024-07-15 13:01:17.777192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.333 [2024-07-15 13:01:17.777273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.333 [2024-07-15 13:01:17.777885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.333 [2024-07-15 13:01:17.777903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:06.268 13:01:18 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:06.833 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:06.833 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:07.090 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:07.090 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.347 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:07.347 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:07.347 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:07.347 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:07.347 13:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.912 [2024-07-15 13:01:20.100723] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.912 13:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:08.168 13:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.168 13:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.426 13:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.426 13:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:08.683 13:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.248 [2024-07-15 13:01:21.503847] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.248 13:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.504 13:01:21 
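With the target up, perf.sh provisions everything over JSON-RPC: the local NVMe controller at 0000:00:10.0 is attached through gen_nvme.sh and load_subsystem_config and surfaces as Nvme0n1, a 64 MiB malloc bdev with 512 B blocks becomes Malloc0, and both are exposed as namespaces of nqn.2016-06.io.spdk:cnode1 behind data and discovery listeners on 10.0.0.2:4420. The same sequence as one sketch; flags are copied from the log, the process substitution feeding load_subsystem_config is an assumption (the trace only shows the two commands on the same perf.sh line), and the bare -o transport option is reproduced from NVMF_TRANSPORT_OPTS without interpretation:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc load_subsystem_config < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)   # attach the local NVMe controller
    $rpc nvmf_create_transport -t tcp -o                                               # transport options as in the log
    $rpc bdev_malloc_create 64 512                                                     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420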
nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:09.504 13:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:09.504 13:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:09.504 13:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:10.887 Initializing NVMe Controllers 00:17:10.887 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:10.887 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:10.887 Initialization complete. Launching workers. 00:17:10.887 ======================================================== 00:17:10.887 Latency(us) 00:17:10.887 Device Information : IOPS MiB/s Average min max 00:17:10.887 PCIE (0000:00:10.0) NSID 1 from core 0: 25141.00 98.21 1272.12 289.76 5436.89 00:17:10.887 ======================================================== 00:17:10.887 Total : 25141.00 98.21 1272.12 289.76 5436.89 00:17:10.887 00:17:10.887 13:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:12.265 Initializing NVMe Controllers 00:17:12.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:12.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:12.265 Initialization complete. Launching workers. 00:17:12.265 ======================================================== 00:17:12.265 Latency(us) 00:17:12.265 Device Information : IOPS MiB/s Average min max 00:17:12.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1943.24 7.59 514.36 214.66 4374.56 00:17:12.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8094.87 7918.00 12009.17 00:17:12.265 ======================================================== 00:17:12.265 Total : 2067.74 8.08 970.79 214.66 12009.17 00:17:12.265 00:17:12.265 13:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:13.287 Initializing NVMe Controllers 00:17:13.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:13.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:13.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:13.287 Initialization complete. Launching workers. 
00:17:13.287 ======================================================== 00:17:13.287 Latency(us) 00:17:13.287 Device Information : IOPS MiB/s Average min max 00:17:13.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7722.52 30.17 4142.80 714.75 8825.15 00:17:13.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2674.40 10.45 12073.36 6485.62 24213.46 00:17:13.287 ======================================================== 00:17:13.288 Total : 10396.92 40.61 6182.78 714.75 24213.46 00:17:13.288 00:17:13.544 13:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:13.544 13:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:16.083 Initializing NVMe Controllers 00:17:16.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.083 Controller IO queue size 128, less than required. 00:17:16.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.083 Controller IO queue size 128, less than required. 00:17:16.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:16.083 Initialization complete. Launching workers. 00:17:16.083 ======================================================== 00:17:16.083 Latency(us) 00:17:16.083 Device Information : IOPS MiB/s Average min max 00:17:16.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1054.65 263.66 123475.96 63440.60 410701.04 00:17:16.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 502.35 125.59 266443.04 102990.89 448877.54 00:17:16.083 ======================================================== 00:17:16.083 Total : 1557.00 389.25 169603.25 63440.60 448877.54 00:17:16.083 00:17:16.083 13:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:16.341 Initializing NVMe Controllers 00:17:16.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.341 Controller IO queue size 128, less than required. 00:17:16.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:16.341 Controller IO queue size 128, less than required. 00:17:16.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:16.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:16.341 WARNING: Some requested NVMe devices were skipped 00:17:16.341 No valid NVMe controllers or AIO or URING devices found 00:17:16.341 13:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:18.870 Initializing NVMe Controllers 00:17:18.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.870 Controller IO queue size 128, less than required. 00:17:18.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.870 Controller IO queue size 128, less than required. 00:17:18.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:18.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:18.870 Initialization complete. Launching workers. 00:17:18.870 00:17:18.870 ==================== 00:17:18.870 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:18.870 TCP transport: 00:17:18.870 polls: 6623 00:17:18.870 idle_polls: 3420 00:17:18.870 sock_completions: 3203 00:17:18.870 nvme_completions: 5505 00:17:18.870 submitted_requests: 8266 00:17:18.870 queued_requests: 1 00:17:18.870 00:17:18.870 ==================== 00:17:18.870 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:18.870 TCP transport: 00:17:18.870 polls: 6748 00:17:18.870 idle_polls: 3723 00:17:18.870 sock_completions: 3025 00:17:18.870 nvme_completions: 5949 00:17:18.870 submitted_requests: 8942 00:17:18.870 queued_requests: 1 00:17:18.870 ======================================================== 00:17:18.870 Latency(us) 00:17:18.870 Device Information : IOPS MiB/s Average min max 00:17:18.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1373.38 343.34 94901.80 53836.11 158591.50 00:17:18.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.17 371.04 86981.81 45525.22 149324.72 00:17:18.870 ======================================================== 00:17:18.870 Total : 2857.54 714.39 90788.27 45525.22 158591.50 00:17:18.870 00:17:18.870 13:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:18.870 13:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.128 rmmod nvme_tcp 00:17:19.128 rmmod nvme_fabrics 00:17:19.128 rmmod nvme_keyring 00:17:19.128 13:01:31 
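The spdk_nvme_perf runs above all share one shape, varying only queue depth, I/O size and run time; the last one adds --transport-stat, which is where the per-poll-group TCP counters (polls, idle_polls, sock_completions, nvme_completions, submitted/queued requests) come from. A minimal pair of invocations against the listener created earlier, with flags copied from the log (-q queue depth, -o I/O size in bytes, -w workload, -M read percentage of the mix, -t seconds, -r target transport ID):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$addr"                    # latency-oriented: QD 1, 4 KiB I/O
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$addr" --transport-stat   # bandwidth-oriented, with the TCP transport statistics shown above

The -o 36964 run is the one expected failure: 36964 bytes is a multiple of neither the 512 B nor the 4096 B namespace sector size, so both namespaces are skipped and perf reports "No valid NVMe controllers or AIO or URING devices found" instead of results.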
nvmf_tcp.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@493 -- # '[' -n 86980 ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@494 -- # killprocess 86980 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86980 ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86980 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86980 00:17:19.128 killing process with pid 86980 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86980' 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86980 00:17:19.128 13:01:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86980 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:17:20.063 00:17:20.063 real 0m15.268s 00:17:20.063 user 0m56.940s 00:17:20.063 sys 0m3.426s 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.063 13:01:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.063 ************************************ 00:17:20.063 END TEST nvmf_perf 00:17:20.063 ************************************ 00:17:20.063 13:01:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:20.063 13:01:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:20.063 13:01:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.063 13:01:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.063 13:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.063 ************************************ 00:17:20.063 START TEST nvmf_fio_host 00:17:20.063 ************************************ 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:20.063 * Looking for test storage... 
00:17:20.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:20.063 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.064 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@420 -- # [[ no == yes ]] 
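Unlike the perf test, fio.sh drives the target through the SPDK fio plugin rather than the kernel initiator, so the NVME_* values sourced above (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_CONNECT) are not exercised in the portion shown here. Purely as an illustration of what those variables are for, a kernel-initiator connection to a subsystem such as nqn.2016-06.io.spdk:cnode1 would look roughly like this (not executed by this test):

    hostnqn=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:<random>, as at common.sh@17
    hostid=${hostnqn##*uuid:}                   # matches the uuid suffix of the NQN, as at common.sh@18
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
         --hostnqn="$hostnqn" --hostid="$hostid"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1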
00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@436 -- # nvmf_veth_init 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:17:20.064 Cannot find device "nvmf_tgt_br" 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.064 Cannot find device "nvmf_tgt_br2" 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:17:20.064 Cannot find device "nvmf_tgt_br" 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:17:20.064 Cannot find device "nvmf_tgt_br2" 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@167 -- # 
true 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.064 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:17:20.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:17:20.324 00:17:20.324 --- 10.0.0.2 ping statistics --- 00:17:20.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.324 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:17:20.324 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:20.324 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:20.324 00:17:20.324 --- 10.0.0.3 ping statistics --- 00:17:20.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.324 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:20.324 00:17:20.324 --- 10.0.0.1 ping statistics --- 00:17:20.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.324 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@437 -- # return 0 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87462 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87462 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87462 ']' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.324 13:01:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.324 [2024-07-15 13:01:32.779473] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:17:20.324 [2024-07-15 13:01:32.779791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.583 [2024-07-15 13:01:32.923892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.583 [2024-07-15 13:01:33.013742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.583 [2024-07-15 13:01:33.014013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.583 [2024-07-15 13:01:33.014266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.583 [2024-07-15 13:01:33.014430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.583 [2024-07-15 13:01:33.014635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.583 [2024-07-15 13:01:33.014825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.583 [2024-07-15 13:01:33.014955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.583 [2024-07-15 13:01:33.015636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.583 [2024-07-15 13:01:33.015675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.518 13:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.518 13:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:21.518 13:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.776 [2024-07-15 13:01:34.063796] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.776 13:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:21.776 13:01:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.776 13:01:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.776 13:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:22.033 Malloc1 00:17:22.033 13:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.292 13:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:22.549 13:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.807 [2024-07-15 13:01:35.256285] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:23.065 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:23.323 13:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:23.323 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:23.323 fio-3.35 00:17:23.323 Starting 1 thread 00:17:25.849 00:17:25.849 test: (groupid=0, jobs=1): err= 0: pid=87594: Mon Jul 15 13:01:37 2024 00:17:25.849 read: IOPS=8966, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:17:25.849 slat (usec): min=2, max=339, avg= 2.66, stdev= 3.22 00:17:25.849 clat (usec): min=3304, max=12822, avg=7461.96, stdev=604.59 00:17:25.849 lat (usec): min=3352, max=12824, avg=7464.62, stdev=604.32 00:17:25.849 clat percentiles (usec): 00:17:25.849 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:17:25.849 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:17:25.849 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8455], 00:17:25.849 | 99.00th=[ 
9241], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[11994], 00:17:25.849 | 99.99th=[12649] 00:17:25.849 bw ( KiB/s): min=35112, max=36200, per=100.00%, avg=35874.00, stdev=512.89, samples=4 00:17:25.849 iops : min= 8778, max= 9050, avg=8968.50, stdev=128.22, samples=4 00:17:25.850 write: IOPS=8989, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec); 0 zone resets 00:17:25.850 slat (usec): min=2, max=249, avg= 2.77, stdev= 2.12 00:17:25.850 clat (usec): min=2402, max=12792, avg=6755.47, stdev=551.75 00:17:25.850 lat (usec): min=2416, max=12794, avg=6758.24, stdev=551.53 00:17:25.850 clat percentiles (usec): 00:17:25.850 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:17:25.850 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:17:25.850 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7570], 00:17:25.850 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11338], 99.95th=[12518], 00:17:25.850 | 99.99th=[12780] 00:17:25.850 bw ( KiB/s): min=35576, max=36240, per=99.97%, avg=35946.00, stdev=309.93, samples=4 00:17:25.850 iops : min= 8894, max= 9060, avg=8986.50, stdev=77.48, samples=4 00:17:25.850 lat (msec) : 4=0.08%, 10=99.67%, 20=0.24% 00:17:25.850 cpu : usr=64.86%, sys=24.53%, ctx=9, majf=0, minf=7 00:17:25.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:25.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.850 issued rwts: total=17996,18041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.850 00:17:25.850 Run status group 0 (all jobs): 00:17:25.850 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:17:25.850 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:25.850 13:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:25.850 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:25.850 fio-3.35 00:17:25.850 Starting 1 thread 00:17:28.375 00:17:28.375 test: (groupid=0, jobs=1): err= 0: pid=87643: Mon Jul 15 13:01:40 2024 00:17:28.375 read: IOPS=7904, BW=124MiB/s (130MB/s)(248MiB/2007msec) 00:17:28.375 slat (usec): min=3, max=123, avg= 3.92, stdev= 1.80 00:17:28.375 clat (usec): min=2509, max=20132, avg=9634.23, stdev=2456.84 00:17:28.375 lat (usec): min=2513, max=20136, avg=9638.15, stdev=2456.88 00:17:28.375 clat percentiles (usec): 00:17:28.375 | 1.00th=[ 5014], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7373], 00:17:28.375 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10290], 00:17:28.375 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12387], 95.00th=[13698], 00:17:28.375 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17957], 99.95th=[19268], 00:17:28.375 | 99.99th=[20055] 00:17:28.375 bw ( KiB/s): min=56960, max=73216, per=50.35%, avg=63672.00, stdev=7409.57, samples=4 00:17:28.375 iops : min= 3560, max= 4576, avg=3979.50, stdev=463.10, samples=4 00:17:28.375 write: IOPS=4720, BW=73.8MiB/s (77.3MB/s)(131MiB/1770msec); 0 zone resets 00:17:28.375 slat (usec): min=37, max=317, avg=39.86, stdev= 6.95 00:17:28.375 clat (usec): min=5718, max=18331, avg=11621.17, stdev=2071.31 00:17:28.375 lat (usec): min=5759, max=18368, avg=11661.03, stdev=2071.57 00:17:28.375 clat percentiles (usec): 00:17:28.375 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9896], 00:17:28.375 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:17:28.375 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15664], 00:17:28.375 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:17:28.375 | 99.99th=[18220] 00:17:28.375 bw ( KiB/s): min=59456, max=76288, per=88.00%, avg=66472.00, stdev=7656.83, samples=4 00:17:28.375 iops : min= 3716, max= 4768, avg=4154.50, stdev=478.55, samples=4 00:17:28.375 lat (msec) : 4=0.17%, 10=44.99%, 20=54.83%, 50=0.01% 00:17:28.375 cpu : usr=72.25%, sys=18.14%, ctx=9, majf=0, minf=24 00:17:28.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:28.375 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.375 issued rwts: total=15864,8356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.375 00:17:28.375 Run status group 0 (all jobs): 00:17:28.375 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=248MiB (260MB), run=2007-2007msec 00:17:28.375 WRITE: bw=73.8MiB/s (77.3MB/s), 73.8MiB/s-73.8MiB/s (77.3MB/s-77.3MB/s), io=131MiB (137MB), run=1770-1770msec 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.375 rmmod nvme_tcp 00:17:28.375 rmmod nvme_fabrics 00:17:28.375 rmmod nvme_keyring 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@493 -- # '[' -n 87462 ']' 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@494 -- # killprocess 87462 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87462 ']' 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87462 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87462 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:28.375 killing process with pid 87462 00:17:28.375 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87462' 00:17:28.632 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87462 00:17:28.632 13:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87462 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:17:28.632 00:17:28.632 real 0m8.781s 00:17:28.632 user 0m36.230s 00:17:28.632 sys 0m2.184s 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.632 13:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.632 ************************************ 00:17:28.632 END TEST nvmf_fio_host 00:17:28.632 ************************************ 00:17:28.632 13:01:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.632 13:01:41 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:28.632 13:01:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.632 13:01:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.632 13:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.891 ************************************ 00:17:28.891 START TEST nvmf_failover 00:17:28.891 ************************************ 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:28.891 * Looking for test storage... 
00:17:28.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
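With NET_TYPE=virt, nvmftestinit builds a purely virtual test network, so every address used later in this log is a veth endpoint rather than real hardware: the initiator keeps 10.0.0.1 in the root namespace while the target answers on 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A condensed sketch of what the nvmf_veth_init steps traced below amount to (run as root; the second target leg nvmf_tgt_if2/10.0.0.3 and the individual link-up commands shown in the trace are omitted here for brevity):

    # recreate the virtual topology used by this test run
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT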
00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@436 -- # nvmf_veth_init 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:17:28.891 Cannot find device "nvmf_tgt_br" 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.891 Cannot find device "nvmf_tgt_br2" 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # true 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:17:28.891 Cannot find device "nvmf_tgt_br" 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:17:28.891 Cannot find device "nvmf_tgt_br2" 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:28.891 13:01:41 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.891 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- 
# ping -c 1 10.0.0.2 00:17:29.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:29.149 00:17:29.149 --- 10.0.0.2 ping statistics --- 00:17:29.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.149 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:17:29.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:29.149 00:17:29.149 --- 10.0.0.3 ping statistics --- 00:17:29.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.149 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:29.149 00:17:29.149 --- 10.0.0.1 ping statistics --- 00:17:29.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.149 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@437 -- # return 0 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@485 -- # nvmfpid=87852 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@486 -- # waitforlisten 87852 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87852 ']' 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
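With connectivity confirmed by the pings above, nvmfappstart launches the target inside the namespace: -m 0xE pins the reactors to cores 1-3 (matching the "Reactor started on core 1/2/3" notices further down), and waitforlisten blocks until the application answers on its RPC socket. A minimal hand-run sketch of the same thing, assuming the default RPC socket /var/tmp/spdk.sock:

    # start nvmf_tgt in the target namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # keep polling until the target is ready to serve rpc.py calls
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done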
00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.149 13:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.407 [2024-07-15 13:01:41.667159] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:17:29.407 [2024-07-15 13:01:41.667997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.407 [2024-07-15 13:01:41.812596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.665 [2024-07-15 13:01:41.881372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.665 [2024-07-15 13:01:41.881447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.665 [2024-07-15 13:01:41.881461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.665 [2024-07-15 13:01:41.881471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.665 [2024-07-15 13:01:41.881480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.665 [2024-07-15 13:01:41.881658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.665 [2024-07-15 13:01:41.882427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.665 [2024-07-15 13:01:41.882494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.231 13:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.231 13:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:30.231 13:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:30.231 13:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.231 13:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:30.488 13:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.488 13:01:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.746 [2024-07-15 13:01:42.987499] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.746 13:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:31.022 Malloc0 00:17:31.022 13:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.278 13:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.535 13:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.792 [2024-07-15 13:01:44.041883] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.792 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:32.050 [2024-07-15 13:01:44.282072] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:32.050 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:32.307 [2024-07-15 13:01:44.578377] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87969 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87969 /var/tmp/bdevperf.sock 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87969 ']' 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
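bdevperf is started idle (-z) so the test can drive it over /var/tmp/bdevperf.sock: two paths to nqn.2016-06.io.spdk:cnode1 are attached under the same controller name, the verify workload is kicked off with perform_tests, and failover is forced by removing the listener that the active path is using. A condensed sketch of that sequence as it appears in the trace below; the repeated tcp.c:1704 "recv state ... is same with the state(5)" errors that follow each remove_listener are the target-side qpairs on the removed port being torn down while bdevperf fails over to the surviving path.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # give bdevperf two paths to the same subsystem (ports 4420 and 4421)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start I/O, then drop the first listener on the target to force a path switch
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420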
00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.307 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:32.564 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.564 13:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:32.564 13:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:33.127 NVMe0n1 00:17:33.127 13:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:33.385 00:17:33.385 13:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88003 00:17:33.385 13:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:33.385 13:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.365 13:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.622 [2024-07-15 13:01:46.953279] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953333] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953345] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953354] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953362] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953371] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953381] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953389] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953398] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953406] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953415] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953423] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953431] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the 
state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953440] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953448] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953456] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953465] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953473] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 [2024-07-15 13:01:46.953491] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9fa0 is same with the state(5) to be set 00:17:34.622 13:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:37.920 13:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:37.920 00:17:37.920 13:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:38.179 [2024-07-15 13:01:50.612486] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612537] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612550] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612558] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612567] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612575] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612584] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612592] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612601] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612609] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612618] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612626] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.179 [2024-07-15 13:01:50.612634] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 
00:17:38.180 [2024-07-15 13:01:50.613574] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 [2024-07-15 13:01:50.613589] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 [2024-07-15 13:01:50.613598] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 [2024-07-15 13:01:50.613606] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 [2024-07-15 13:01:50.613615] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 [2024-07-15 13:01:50.613623] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaae20 is same with the state(5) to be set 00:17:38.180 13:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:41.478 13:01:53 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.735 [2024-07-15 13:01:53.968501] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.735 13:01:53 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:42.668 13:01:54 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:42.928 [2024-07-15 13:01:55.299196] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299238] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299250] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299259] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299268] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299277] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299285] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299294] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299302] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299310] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299319] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab9a0 is same with the state(5) to be set 00:17:42.928 [2024-07-15 13:01:55.299333] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: 
00:17:42.929 13:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88003
00:17:49.567 0
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87969
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87969 ']'
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87969
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87969
00:17:49.568 killing process with pid 87969
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87969'
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87969
00:17:49.568 13:02:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87969
00:17:49.568 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:49.568 [2024-07-15 13:01:44.655913] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization...
00:17:49.568 [2024-07-15 13:01:44.656034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87969 ]
00:17:49.568 [2024-07-15 13:01:44.795308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.568 [2024-07-15 13:01:44.854278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:49.568 Running I/O for 15 seconds...
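The autotest_common.sh xtrace above shows the shape of the killprocess helper invoked at failover.sh@61. Below is a minimal sketch reconstructed only from those traced commands; the early returns and the empty sudo branch are assumptions, and the real helper in autotest_common.sh handles more cases than this.

    # sketch of a killprocess-style helper, pieced together from the xtrace above
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @948: a pid is required
        kill -0 "$pid" || return 1                          # @952: make sure it is still running
        if [ "$(uname)" = Linux ]; then                     # @953
            process_name=$(ps --no-headers -o comm= "$pid") # @954: reactor_0 in this run
        fi
        if [ "$process_name" = sudo ]; then                 # @958: real helper special-cases sudo
            :                                               # omitted in this sketch
        fi
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972: reap it and return its exit code
    }

Invoked here as killprocess 87969, it stops the bdevperf process whose log (try.txt) is dumped below.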
00:17:49.568 [2024-07-15 13:01:46.954636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.568 [2024-07-15 13:01:46.954689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.568 [2024-07-15 13:01:46.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.568 [2024-07-15 13:01:46.954777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.568 [2024-07-15 13:01:46.954810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.954981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.954997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955332] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74280 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.568 [2024-07-15 13:01:46.955775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.568 [2024-07-15 13:01:46.955791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.955829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.955857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.955887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.955915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 
[2024-07-15 13:01:46.955943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.955972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.955987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.569 [2024-07-15 13:01:46.956613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.956974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.956989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.957003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.957017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.957031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.569 [2024-07-15 13:01:46.957046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.569 [2024-07-15 13:01:46.957060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 
13:01:46.957139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.570 [2024-07-15 13:01:46.957564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74624 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74632 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74640 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74648 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74656 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74664 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74672 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.957956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.957966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74680 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.957982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.957995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74688 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 
13:01:46.958071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74696 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74704 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74712 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74720 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74728 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.570 [2024-07-15 13:01:46.958307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74736 len:8 PRP1 0x0 PRP2 0x0 00:17:49.570 [2024-07-15 13:01:46.958320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.570 [2024-07-15 13:01:46.958334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.570 [2024-07-15 13:01:46.958344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74744 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74760 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74768 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74776 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74784 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:74792 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74800 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74808 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74816 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74824 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74832 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74840 len:8 PRP1 0x0 PRP2 0x0 
00:17:49.571 [2024-07-15 13:01:46.958949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.958962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.958972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.958982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74848 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.958995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.959018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.959028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74856 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.959041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.959063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.959073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74864 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.959086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.571 [2024-07-15 13:01:46.959109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.571 [2024-07-15 13:01:46.959119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74872 len:8 PRP1 0x0 PRP2 0x0 00:17:49.571 [2024-07-15 13:01:46.959144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959192] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2126c90 was disconnected and freed. reset controller. 
00:17:49.571 [2024-07-15 13:01:46.959210] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:49.571 [2024-07-15 13:01:46.959264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.571 [2024-07-15 13:01:46.959285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.571 [2024-07-15 13:01:46.959333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.571 [2024-07-15 13:01:46.959359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.571 [2024-07-15 13:01:46.959391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:46.959404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:49.571 [2024-07-15 13:01:46.963364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:49.571 [2024-07-15 13:01:46.963406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aae30 (9): Bad file descriptor 00:17:49.571 [2024-07-15 13:01:46.999596] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
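The block above is the host-side view of a forced path failure: queued WRITEs complete with ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the controller reset finishes. A minimal sketch of how such alternate paths are normally registered and a failover forced, assuming the stock scripts/rpc.py helpers, the address/NQN seen in this log, and that re-attaching with the same -b name adds a secondary trid as in SPDK's failover test (exact flags can differ between releases):

# Register the same subsystem over two portals so bdev_nvme has an alternate trid to fail over to.
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# On the target side, dropping the first listener pushes the host onto the second path,
# which produces the SQ DELETION aborts and the "Start failover" notice shown above.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420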
00:17:49.571 [2024-07-15 13:01:50.615044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.571 [2024-07-15 13:01:50.615306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.571 [2024-07-15 13:01:50.615320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.572 [2024-07-15 13:01:50.615779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.572 [2024-07-15 13:01:50.615806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.615983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.615998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77400 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:49.573 [2024-07-15 13:01:50.616683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.616976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.616990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.617005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.573 [2024-07-15 13:01:50.617018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.617034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.573 [2024-07-15 13:01:50.617047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.617062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.573 [2024-07-15 13:01:50.617075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.617091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.573 [2024-07-15 13:01:50.617105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.573 [2024-07-15 13:01:50.617121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 
[2024-07-15 13:01:50.617903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.617972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.617987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.574 [2024-07-15 13:01:50.618375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.574 [2024-07-15 13:01:50.618388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78048 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.618977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.618992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.575 [2024-07-15 13:01:50.619006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.575 [2024-07-15 13:01:50.619055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:17:49.575 [2024-07-15 13:01:50.619068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.575 [2024-07-15 13:01:50.619096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.575 [2024-07-15 13:01:50.619107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:17:49.575 [2024-07-15 13:01:50.619120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619192] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2128b80 was disconnected and freed. reset controller. 
00:17:49.575 [2024-07-15 13:01:50.619211] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:49.575 [2024-07-15 13:01:50.619267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:50.619288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:50.619316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:50.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:50.619370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:50.619383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:49.575 [2024-07-15 13:01:50.619427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aae30 (9): Bad file descriptor 00:17:49.575 [2024-07-15 13:01:50.623417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:49.575 [2024-07-15 13:01:50.659779] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
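The same pattern repeats for the next hop, 10.0.0.2:4421 to 10.0.0.2:4422, again ending with a successful controller reset. A hedged way to confirm which path the host is actually using after each hop, assuming the stock RPCs are available (field names in the JSON output vary by SPDK release):

# Inspect the controller's current and alternate trids after each failover.
scripts/rpc.py bdev_nvme_get_controllers -n NVMe0
# Re-adding the dropped listener makes the original portal available again for the next cycle.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420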
00:17:49.575 [2024-07-15 13:01:55.298865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:55.298932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.298952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:55.298966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.298980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:55.298993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.299007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.575 [2024-07-15 13:01:55.299020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.299033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aae30 is same with the state(5) to be set 00:17:49.575 [2024-07-15 13:01:55.300201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.575 [2024-07-15 13:01:55.300544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.575 [2024-07-15 13:01:55.300558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.300971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.300987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:49.576 [2024-07-15 13:01:55.301340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.576 [2024-07-15 13:01:55.301646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.576 [2024-07-15 13:01:55.301661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.301674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.301977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.301997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 
[2024-07-15 13:01:55.302552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.577 [2024-07-15 13:01:55.302638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.577 [2024-07-15 13:01:55.302945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.577 [2024-07-15 13:01:55.302960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.302973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.302988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:49.578 [2024-07-15 13:01:55.303811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.303979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.303994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.304008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.304023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.304037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.304054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.304077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.304098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.578 [2024-07-15 13:01:55.304112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 
13:01:55.304142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:49.578 [2024-07-15 13:01:55.304156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:49.578 [2024-07-15 13:01:55.304167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25232 len:8 PRP1 0x0 PRP2 0x0 00:17:49.578 [2024-07-15 13:01:55.304180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.578 [2024-07-15 13:01:55.304227] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2137c30 was disconnected and freed. reset controller. 00:17:49.578 [2024-07-15 13:01:55.304246] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:49.578 [2024-07-15 13:01:55.304261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:49.578 [2024-07-15 13:01:55.308260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:49.578 [2024-07-15 13:01:55.308301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aae30 (9): Bad file descriptor 00:17:49.578 [2024-07-15 13:01:55.341666] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:49.578 00:17:49.578 Latency(us) 00:17:49.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.578 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:49.579 Verification LBA range: start 0x0 length 0x4000 00:17:49.579 NVMe0n1 : 15.01 8787.43 34.33 206.10 0.00 14198.11 644.19 21805.61 00:17:49.579 =================================================================================================================== 00:17:49.579 Total : 8787.43 34.33 206.10 0.00 14198.11 644.19 21805.61 00:17:49.579 Received shutdown signal, test time was about 15.000000 seconds 00:17:49.579 00:17:49.579 Latency(us) 00:17:49.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.579 =================================================================================================================== 00:17:49.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88205 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88205 /var/tmp/bdevperf.sock 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88205 ']' 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:17:49.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.579 [2024-07-15 13:02:01.634331] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:49.579 [2024-07-15 13:02:01.878576] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:49.579 13:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:49.836 NVMe0n1 00:17:49.836 13:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:50.095 00:17:50.095 13:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:50.352 00:17:50.609 13:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:50.609 13:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:50.866 13:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:50.866 13:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:54.274 13:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:54.274 13:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:54.274 13:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88324 00:17:54.274 13:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.274 13:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88324 00:17:55.644 0 00:17:55.644 13:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:55.644 [2024-07-15 13:02:01.077902] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
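The RPC sequence traced at failover.sh@76 through @87 above is the second failover exercise in miniature: publish two extra portals, register all three portals under the same bdev name so bdev_nvme holds alternate paths, drop the active path, and confirm the controller survives. A condensed sketch of that flow, reusing the addresses, ports, and NQN from the trace (rpc.py paths shortened; a loop replaces the three explicit attach calls):

  # Target side: expose two additional listeners for the subsystem.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Host side (bdevperf): attach the same controller name to all three portals
  # so bdev_nvme keeps the extra trids as failover paths.
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Drop the active path; the failover to 10.0.0.2:4421 shows up in the log below.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # The controller must still be registered after the path switch.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0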
00:17:55.644 [2024-07-15 13:02:01.077994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88205 ] 00:17:55.644 [2024-07-15 13:02:01.218775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.644 [2024-07-15 13:02:01.282677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.644 [2024-07-15 13:02:03.305115] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:55.644 [2024-07-15 13:02:03.305234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.644 [2024-07-15 13:02:03.305261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.644 [2024-07-15 13:02:03.305280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.644 [2024-07-15 13:02:03.305294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.644 [2024-07-15 13:02:03.305309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.644 [2024-07-15 13:02:03.305323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.644 [2024-07-15 13:02:03.305338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.644 [2024-07-15 13:02:03.305352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.644 [2024-07-15 13:02:03.305367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.644 [2024-07-15 13:02:03.305411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.644 [2024-07-15 13:02:03.305448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8be30 (9): Bad file descriptor 00:17:55.644 [2024-07-15 13:02:03.315194] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:55.644 Running I/O for 1 seconds... 
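The 'Running I/O for 1 seconds...' line above is where the verify job actually starts: bdevperf was launched with -z, so it sits idle on its RPC socket until perform_tests is issued, which lets the script configure the NVMe0 paths first. A rough sketch of that launch-and-trigger pattern, with binaries, socket, and flags copied from the trace (flags beyond those glossed in the comments are reproduced as-is rather than explained):

  # Start bdevperf idle: -z defers the run until an RPC tells it to start.
  # -q/-o/-w/-t: queue depth 128, 4096-byte I/Os, verify workload, 1 second.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # ... wait for the socket, attach the NVMe0 paths via rpc.py as above ...
  # Kick off the configured 1-second verify job over the same socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  # (In the trace the script backgrounds this call and waits on its pid.)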
00:17:55.644 00:17:55.644 Latency(us) 00:17:55.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.644 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:55.644 Verification LBA range: start 0x0 length 0x4000 00:17:55.644 NVMe0n1 : 1.01 8989.51 35.12 0.00 0.00 14153.08 1206.46 14239.19 00:17:55.644 =================================================================================================================== 00:17:55.644 Total : 8989.51 35.12 0.00 0.00 14153.08 1206.46 14239.19 00:17:55.644 13:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:55.644 13:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:55.644 13:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.902 13:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:55.903 13:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:56.468 13:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:56.468 13:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:59.745 13:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:59.745 13:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:59.745 13:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88205 00:17:59.745 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88205 ']' 00:17:59.745 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88205 00:17:59.745 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:59.745 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88205 00:18:00.002 killing process with pid 88205 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88205' 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88205 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88205 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:00.002 13:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:00.260 13:02:12 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # nvmfcleanup 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.260 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.260 rmmod nvme_tcp 00:18:00.260 rmmod nvme_fabrics 00:18:00.260 rmmod nvme_keyring 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@493 -- # '[' -n 87852 ']' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@494 -- # killprocess 87852 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87852 ']' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87852 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87852 00:18:00.518 killing process with pid 87852 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87852' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87852 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87852 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@282 -- # remove_spdk_ns 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:18:00.518 00:18:00.518 real 0m31.868s 00:18:00.518 user 2m4.325s 00:18:00.518 sys 0m4.433s 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.518 13:02:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:00.518 ************************************ 00:18:00.518 END TEST nvmf_failover 00:18:00.518 ************************************ 00:18:00.776 13:02:13 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:00.776 13:02:13 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:00.776 13:02:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:00.776 13:02:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.776 13:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:00.776 ************************************ 00:18:00.776 START TEST nvmf_host_discovery 00:18:00.776 ************************************ 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:00.776 * Looking for test storage... 00:18:00.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.776 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link 
set nvmf_init_br nomaster 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:18:00.777 Cannot find device "nvmf_tgt_br" 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.777 Cannot find device "nvmf_tgt_br2" 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # true 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:18:00.777 Cannot find device "nvmf_tgt_br" 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:18:00.777 Cannot find device "nvmf_tgt_br2" 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:00.777 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:18:01.136 13:02:13 
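The ip commands above build the virtual topology this suite runs on when NET_TYPE=virt: a dedicated namespace for the target plus veth pairs whose bridge-side peers get enslaved to nvmf_br a few lines further down. A condensed sketch of the namespace and addressing portion shown so far (the bridge, iptables rules, and ping checks follow in the log):

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_veth_init steps visible above.
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends join the bridge later.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-facing interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses used throughout the tests: initiator .1, target portals .2 and .3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
```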
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.136 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:18:01.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:01.137 00:18:01.137 --- 10.0.0.2 ping statistics --- 00:18:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.137 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:18:01.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:01.137 00:18:01.137 --- 10.0.0.3 ping statistics --- 00:18:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.137 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:01.137 00:18:01.137 --- 10.0.0.1 ping statistics --- 00:18:01.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.137 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@437 -- # return 0 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@485 -- # nvmfpid=88631 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@486 -- # waitforlisten 88631 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88631 ']' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.137 13:02:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:01.137 [2024-07-15 13:02:13.555182] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:18:01.137 [2024-07-15 13:02:13.555283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.395 [2024-07-15 13:02:13.696797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.395 [2024-07-15 13:02:13.753585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
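nvmfappstart -m 0x2, logged above, amounts to loading nvme-tcp on the host side, launching nvmf_tgt inside the namespace, and waiting for its RPC socket to answer. A simplified sketch; the polling loop is a stand-in for the harness's waitforlisten helper, and rpc_get_methods is just used here as a cheap RPC to probe the socket with:

```bash
#!/usr/bin/env bash
# Sketch of the nvmfappstart step shown above.
modprobe nvme-tcp

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
```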
00:18:01.395 [2024-07-15 13:02:13.753636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.395 [2024-07-15 13:02:13.753649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.395 [2024-07-15 13:02:13.753657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.395 [2024-07-15 13:02:13.753665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.395 [2024-07-15 13:02:13.753688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 [2024-07-15 13:02:14.596038] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 [2024-07-15 13:02:14.604098] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 null0 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 null1 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88681 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88681 /tmp/host.sock 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88681 ']' 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:02.329 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.329 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 [2024-07-15 13:02:14.695901] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:18:02.329 [2024-07-15 13:02:14.696225] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88681 ] 00:18:02.586 [2024-07-15 13:02:14.836249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.586 [2024-07-15 13:02:14.904896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.586 13:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:02.586 13:02:15 
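From here the discovery test drives two SPDK applications: the target on /var/tmp/spdk.sock, which exposes a discovery listener on 10.0.0.2:8009, and a second nvmf_tgt acting as the host on /tmp/host.sock, which is told to follow that discovery service. The essential RPC sequence, taken from the commands in the log:

```bash
#!/usr/bin/env bash
# Discovery test setup, condensed from the rpc_cmd calls above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport plus a discovery listener on 8009.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Host side: second nvmf_tgt (started with -m 0x1 -r /tmp/host.sock) follows it.
$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```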
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:02.586 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.844 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:02.845 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.103 [2024-07-15 13:02:15.360319] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:03.103 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:03.104 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.362 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:03.362 13:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:03.620 [2024-07-15 13:02:16.009004] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:03.620 [2024-07-15 13:02:16.009040] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:03.620 [2024-07-15 13:02:16.009059] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:03.879 [2024-07-15 13:02:16.095162] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:03.879 [2024-07-15 13:02:16.152098] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:03.879 [2024-07-15 13:02:16.152142] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:04.138 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.138 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:04.138 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.397 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 [2024-07-15 13:02:16.945055] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:04.656 [2024-07-15 13:02:16.945745] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:04.656 [2024-07-15 13:02:16.945812] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:04.656 13:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.656 13:02:17 
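Every waitforcondition loop above reduces to polling a host-side RPC and comparing a jq-extracted value against the expected one; for instance the "both portals attached" check (step 122) and the notification-count check come down to the following, using the same RPCs and jq filters that appear in the log:

```bash
#!/usr/bin/env bash
# Poll the host-side socket the way get_subsystem_paths / get_notification_count do.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Ports currently attached to controller nvme0 -- expected "4420 4421" here.
$rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

# Number of notifications received since notify id 0.
$rpc -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'
```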
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:04.656 [2024-07-15 13:02:17.033803] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 [2024-07-15 13:02:17.092097] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:04.656 [2024-07-15 13:02:17.092129] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:04.656 [2024-07-15 13:02:17.092137] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:04.656 13:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 [2024-07-15 13:02:18.234047] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:06.034 [2024-07-15 13:02:18.234090] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:06.034 [2024-07-15 13:02:18.239726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.034 [2024-07-15 13:02:18.239774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.034 [2024-07-15 13:02:18.239790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.034 [2024-07-15 13:02:18.239799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.034 [2024-07-15 13:02:18.239810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.034 [2024-07-15 13:02:18.239819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.034 [2024-07-15 13:02:18.239829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:06.034 [2024-07-15 13:02:18.239838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.034 [2024-07-15 13:02:18.239848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_names 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.034 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:06.034 [2024-07-15 13:02:18.249681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.034 [2024-07-15 13:02:18.259701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.034 [2024-07-15 13:02:18.259836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.034 [2024-07-15 13:02:18.259859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.034 [2024-07-15 13:02:18.259871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.034 [2024-07-15 13:02:18.259890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.034 [2024-07-15 13:02:18.259906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.034 [2024-07-15 13:02:18.259915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.259926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.259943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.035 [2024-07-15 13:02:18.269775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.035 [2024-07-15 13:02:18.269872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.035 [2024-07-15 13:02:18.269894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.035 [2024-07-15 13:02:18.269906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.035 [2024-07-15 13:02:18.269922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.035 [2024-07-15 13:02:18.269937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.035 [2024-07-15 13:02:18.269947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.269956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.269972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:06.035 [2024-07-15 13:02:18.279840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.035 [2024-07-15 13:02:18.279956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.035 [2024-07-15 13:02:18.279981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.035 [2024-07-15 13:02:18.279993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.035 [2024-07-15 13:02:18.280022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.035 [2024-07-15 13:02:18.280037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.035 [2024-07-15 13:02:18.280047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.280056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.280100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:06.035 [2024-07-15 13:02:18.289919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.035 [2024-07-15 13:02:18.290029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.035 [2024-07-15 13:02:18.290053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.035 [2024-07-15 13:02:18.290066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.035 [2024-07-15 13:02:18.290083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.035 [2024-07-15 13:02:18.290110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.035 [2024-07-15 13:02:18.290122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.290131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.290148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
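The burst of "resetting controller ... connect() failed, errno = 111" entries above is the host-side bdev_nvme driver retrying the 10.0.0.2:4420 path that the target just removed; errno 111 is ECONNREFUSED, so each reconnect attempt fails immediately and the stale path is later dropped by the discovery poller. The interleaved common/autotest_common.sh@912-918 lines are the test's polling helper evaluating its condition once per second. A minimal sketch of that helper, reconstructed only from the line references visible in this trace (the exact body in common/autotest_common.sh may differ):

    # Poll an arbitrary shell condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met
            sleep 1
        done
        return 1                       # timed out; the caller treats this as a failure
    }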
00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.035 [2024-07-15 13:02:18.299992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.035 [2024-07-15 13:02:18.300095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.035 [2024-07-15 13:02:18.300119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.035 [2024-07-15 13:02:18.300131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.035 [2024-07-15 13:02:18.300150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.035 [2024-07-15 13:02:18.300188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.035 [2024-07-15 13:02:18.300199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.300208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.300224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:06.035 [2024-07-15 13:02:18.310247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:06.035 [2024-07-15 13:02:18.310342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.035 [2024-07-15 13:02:18.310365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4dc70 with addr=10.0.0.2, port=4420 00:18:06.035 [2024-07-15 13:02:18.310376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4dc70 is same with the state(5) to be set 00:18:06.035 [2024-07-15 13:02:18.310393] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4dc70 (9): Bad file descriptor 00:18:06.035 [2024-07-15 13:02:18.310408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:06.035 [2024-07-15 13:02:18.310418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:06.035 [2024-07-15 13:02:18.310427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:06.035 [2024-07-15 13:02:18.310443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:06.035 [2024-07-15 13:02:18.320265] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:06.035 [2024-07-15 13:02:18.320299] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:06.035 13:02:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.035 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:06.036 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' 
]] 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.295 13:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.282 [2024-07-15 13:02:19.669562] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:07.282 [2024-07-15 13:02:19.669798] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:07.282 [2024-07-15 13:02:19.669871] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:07.540 [2024-07-15 13:02:19.755687] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:07.540 [2024-07-15 13:02:19.816243] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:07.540 [2024-07-15 13:02:19.816508] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 2024/07/15 13:02:19 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:07.540 request: 00:18:07.540 { 00:18:07.540 "method": "bdev_nvme_start_discovery", 00:18:07.540 "params": { 00:18:07.540 "name": "nvme", 00:18:07.540 "trtype": "tcp", 00:18:07.540 "traddr": "10.0.0.2", 00:18:07.540 "adrfam": "ipv4", 00:18:07.540 "trsvcid": "8009", 00:18:07.540 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:07.540 "wait_for_attach": true 00:18:07.540 } 00:18:07.540 } 00:18:07.540 Got JSON-RPC error response 00:18:07.540 GoRPCClient: error on JSON-RPC call 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.540 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 2024/07/15 13:02:19 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:07.540 request: 00:18:07.540 { 00:18:07.541 "method": "bdev_nvme_start_discovery", 00:18:07.541 "params": { 00:18:07.541 "name": "nvme_second", 00:18:07.541 "trtype": "tcp", 00:18:07.541 "traddr": "10.0.0.2", 00:18:07.541 "adrfam": "ipv4", 00:18:07.541 "trsvcid": "8009", 00:18:07.541 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:07.541 "wait_for_attach": true 00:18:07.541 } 00:18:07.541 } 00:18:07.541 Got JSON-RPC error response 00:18:07.541 GoRPCClient: error on JSON-RPC call 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:07.541 13:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.799 13:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 [2024-07-15 13:02:21.097222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:08.732 [2024-07-15 13:02:21.097485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c66000 with addr=10.0.0.2, port=8010 00:18:08.732 [2024-07-15 13:02:21.097666] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:08.732 [2024-07-15 13:02:21.097889] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:08.732 [2024-07-15 13:02:21.097911] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:09.662 [2024-07-15 13:02:22.097217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:09.662 [2024-07-15 13:02:22.097296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c66000 with addr=10.0.0.2, port=8010 00:18:09.662 [2024-07-15 13:02:22.097321] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:09.662 [2024-07-15 13:02:22.097332] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:09.662 [2024-07-15 13:02:22.097342] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:11.032 [2024-07-15 13:02:23.097055] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:11.032 2024/07/15 13:02:23 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 
trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:11.032 request: 00:18:11.032 { 00:18:11.032 "method": "bdev_nvme_start_discovery", 00:18:11.032 "params": { 00:18:11.032 "name": "nvme_second", 00:18:11.032 "trtype": "tcp", 00:18:11.032 "traddr": "10.0.0.2", 00:18:11.032 "adrfam": "ipv4", 00:18:11.032 "trsvcid": "8010", 00:18:11.032 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:11.032 "wait_for_attach": false, 00:18:11.032 "attach_timeout_ms": 3000 00:18:11.032 } 00:18:11.032 } 00:18:11.032 Got JSON-RPC error response 00:18:11.032 GoRPCClient: error on JSON-RPC call 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88681 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:11.032 rmmod nvme_tcp 00:18:11.032 rmmod nvme_fabrics 00:18:11.032 rmmod nvme_keyring 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@493 -- # '[' -n 88631 ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@494 -- # killprocess 88631 00:18:11.032 13:02:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88631 ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88631 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88631 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:11.032 killing process with pid 88631 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88631' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88631 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88631 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.032 13:02:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:18:11.032 00:18:11.033 real 0m10.447s 00:18:11.033 user 0m20.603s 00:18:11.033 sys 0m1.425s 00:18:11.033 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.033 ************************************ 00:18:11.033 END TEST nvmf_host_discovery 00:18:11.033 ************************************ 00:18:11.033 13:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.291 13:02:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.291 13:02:23 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:11.291 13:02:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.291 13:02:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.291 13:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.291 ************************************ 00:18:11.291 START TEST nvmf_host_multipath_status 00:18:11.291 ************************************ 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:11.291 * Looking for test storage... 
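The discovery test that just ended exercised bdev_nvme_start_discovery three ways: a plain start, a duplicate start expected to fail with Code=-17 (File exists), and a second-name start against port 8010 with a 3000 ms attach timeout expected to fail with Code=-110 (Connection timed out) because nothing listens there. The rpc_cmd wrapper in the trace appears to forward its arguments to scripts/rpc.py against the host application's socket, so the timed-out call corresponds roughly to a direct invocation like the following (flags copied verbatim from the trace; the wrapper behavior is an assumption):

    # Start a named discovery service via the host app listening on /tmp/host.sock.
    # -T bounds the attach wait in milliseconds, so this returns -110 when no target
    # answers on 10.0.0.2:8010 within 3 seconds.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000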
00:18:11.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:11.291 
13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # prepare_net_devs 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # local -g is_hw=no 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # remove_spdk_ns 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # nvmf_veth_init 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:18:11.291 
Cannot find device "nvmf_tgt_br" 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.291 Cannot find device "nvmf_tgt_br2" 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # true 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:18:11.291 Cannot find device "nvmf_tgt_br" 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:18:11.291 Cannot find device "nvmf_tgt_br2" 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:18:11.291 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip link set 
nvmf_tgt_br2 up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:18:11.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:11.549 00:18:11.549 --- 10.0.0.2 ping statistics --- 00:18:11.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.549 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:18:11.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:11.549 00:18:11.549 --- 10.0.0.3 ping statistics --- 00:18:11.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.549 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:11.549 00:18:11.549 --- 10.0.0.1 ping statistics --- 00:18:11.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.549 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@437 -- # return 0 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:18:11.549 13:02:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # nvmfpid=89149 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # waitforlisten 89149 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89149 ']' 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.549 13:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:11.806 [2024-07-15 13:02:24.063030] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
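nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the RPC socket answers; the RPC calls that follow in the trace give the subsystem one malloc namespace and two TCP listeners on 10.0.0.2. A condensed sketch, run from an SPDK checkout, with waitforlisten replaced by a simple polling loop (rpc_get_methods is used here only as a readiness probe, an assumption rather than what the helper literally does):

  # start nvmf_tgt inside the target namespace on cores 0-1, as nvmfappstart does
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # minimal stand-in for waitforlisten: poll the default RPC socket until the app answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
          sleep 0.5
  done

  # TCP transport, a 64 MiB malloc bdev, and an ANA-reporting subsystem with two listeners on 10.0.0.2
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421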
00:18:11.806 [2024-07-15 13:02:24.063117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.806 [2024-07-15 13:02:24.200305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:12.064 [2024-07-15 13:02:24.281400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.064 [2024-07-15 13:02:24.281452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.064 [2024-07-15 13:02:24.281464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.064 [2024-07-15 13:02:24.281473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.064 [2024-07-15 13:02:24.281480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.064 [2024-07-15 13:02:24.281576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.064 [2024-07-15 13:02:24.281787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89149 00:18:12.630 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:12.888 [2024-07-15 13:02:25.348316] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.146 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:13.405 Malloc0 00:18:13.405 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:13.663 13:02:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.921 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.180 [2024-07-15 13:02:26.391594] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.180 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:14.438 [2024-07-15 13:02:26.679784] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89259 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89259 /var/tmp/bdevperf.sock 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89259 ']' 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.438 13:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:15.384 13:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.384 13:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:15.384 13:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:15.642 13:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:15.900 Nvme0n1 00:18:16.158 13:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:16.415 Nvme0n1 00:18:16.415 13:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:16.415 13:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.314 13:02:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:18.314 13:02:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:18.573 13:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:18.831 13:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.204 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:20.462 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:20.463 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:20.463 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.463 13:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:20.721 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.721 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:20.721 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.721 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:20.979 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.979 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:20.979 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.979 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:21.237 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.237 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:21.237 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.237 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:21.495 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.495 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:21.495 13:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:22.060 13:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:22.318 13:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:23.252 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:23.252 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:23.252 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.252 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:23.511 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:23.511 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:23.511 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.511 13:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:23.769 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.769 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:23.769 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.769 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:24.027 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.027 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:24.027 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.027 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:24.285 13:02:36 
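Every port_status/check_status step in this trace follows one pattern: query bdev_nvme_get_io_paths on the bdevperf RPC socket (the two bdev_nvme_attach_controller calls earlier attached the same subsystem through ports 4420 and 4421, the second with -x multipath, so bdevperf sees a single Nvme0n1 with two I/O paths) and filter the result with jq by transport service ID. Reconstructed from the commands shown here rather than copied from the script, the helpers amount to roughly:

  # port_status <trsvcid> <field> <expected>: assert one attribute of the io_path that uses that listener port
  port_status() {
          local port=$1 field=$2 expected=$3 actual
          actual=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
                  | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
          [[ $actual == "$expected" ]]
  }

  # check_status mirrors the trace: current/connected/accessible, each for ports 4420 then 4421
  check_status() {
          port_status 4420 current "$1" && port_status 4421 current "$2" &&
          port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
          port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }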
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.285 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:24.285 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.285 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:24.544 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.544 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:24.544 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:24.544 13:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.811 13:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.811 13:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:24.811 13:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:25.070 13:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:25.329 13:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:26.701 13:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:26.701 13:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:26.701 13:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.701 13:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:26.701 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.701 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:26.701 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.701 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:26.958 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.958 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:26.958 13:02:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.958 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:27.214 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.214 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:27.214 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.214 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:27.473 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.473 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:27.473 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.473 13:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:28.042 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.042 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:28.042 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.042 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:28.300 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.300 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:28.300 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:28.558 13:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:28.817 13:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:29.757 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:29.757 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:29.757 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.757 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:30.016 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.016 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:30.016 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.016 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:30.581 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:30.581 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:30.581 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.581 13:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:30.837 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:30.837 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:30.837 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:30.837 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:31.093 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.093 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:31.093 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:31.093 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.350 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:31.350 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:31.350 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:31.350 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:31.607 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:31.607 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:31.607 13:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:31.607 13:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:31.864 13:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.237 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:33.494 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.494 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:33.494 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.494 13:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:33.752 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.752 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:33.752 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:33.752 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.011 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.011 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:34.011 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:34.011 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.578 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.578 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:34.578 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.578 13:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:34.836 13:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:34.836 13:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:34.836 13:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:35.095 13:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:35.354 13:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:36.302 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:36.302 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:36.302 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.302 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:36.560 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:36.560 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:36.560 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.560 13:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:36.819 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.819 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:36.819 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.819 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.097 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.097 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.097 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.097 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:37.354 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.354 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:37.354 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.354 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:37.610 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.610 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:37.610 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.610 13:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:37.868 13:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.868 13:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:38.125 13:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:38.125 13:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:38.382 13:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:38.640 13:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:39.576 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:39.576 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:39.576 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.576 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.143 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:40.401 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.401 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:40.401 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.401 13:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:40.660 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.660 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:40.660 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.660 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:40.918 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.918 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:40.918 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.918 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:41.176 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.176 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:41.176 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:41.433 13:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:41.691 13:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:43.061 
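Each pass flips the ANA state of the two listeners on the target side and then sleeps a second so the host can pick up the updated ANA log page before the paths are re-checked. Based on the @59/@60 calls traced above (the sleep lives in the caller in the real script), the helper is essentially:

  # set_ANA_state <state for port 4420> <state for port 4421>
  # states exercised in this test: optimized, non_optimized, inaccessible
  set_ANA_state() {
          ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                  -t tcp -a 10.0.0.2 -s 4420 -n "$1"
          ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                  -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # e.g. leave only the 4420 path usable, then verify (matches the @104/@106 pass above)
  set_ANA_state non_optimized inaccessible
  sleep 1
  check_status true false true true true false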
13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:43.061 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:43.061 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.061 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:43.061 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:43.062 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:43.062 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.062 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.319 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.319 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:43.319 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.319 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:43.577 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.577 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:43.577 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.577 13:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:43.835 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.835 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:43.835 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.835 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.093 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.093 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.093 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.093 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:44.352 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.352 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:44.352 13:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:44.917 13:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:44.917 13:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.290 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:46.547 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.547 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:46.547 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.547 13:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:46.805 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.805 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:46.805 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.805 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.371 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.371 13:02:59 
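A few steps back (@116) the test also switched the Nvme0n1 bdev from the default single-active-path behaviour to active_active, which is why the checks from @121 onward can report current == true on both ports at once. Using the set_ANA_state and check_status helpers sketched earlier, that step is just:

  # spread I/O over every optimized path instead of a single "current" one
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

  # with both listeners optimized, both io_paths are now reported as current
  set_ANA_state optimized optimized
  sleep 1
  check_status true true true true true true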
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:47.371 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.371 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.371 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.371 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:47.372 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.372 13:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:47.629 13:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.629 13:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:47.629 13:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:47.888 13:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:48.453 13:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:49.407 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:49.407 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:49.407 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.407 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.665 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.665 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:49.665 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.665 13:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:49.923 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:49.923 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:49.923 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.923 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:50.181 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.181 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:50.181 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.181 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:50.439 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.439 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:50.439 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.439 13:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:50.695 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.696 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:50.696 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:50.696 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89259 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89259 ']' 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89259 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89259 00:18:50.967 killing process with pid 89259 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89259' 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89259 00:18:50.967 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89259 00:18:51.225 Connection closed with partial response: 00:18:51.225 
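The rpc.py/jq pairs traced above all come from a handful of helpers in test/nvmf/host/multipath_status.sh. A minimal sketch of what they do, reconstructed from the xtrace (function bodies and variable names here are assumptions, not the script's verbatim source):

    # Sketch only, pieced together from the trace above; the real helpers live in
    # test/nvmf/host/multipath_status.sh and may differ in detail.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    set_ANA_state() {
        # Flip the ANA state of the two target listeners (ports 4420 and 4421).
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {
        # Ask the bdevperf host for its I/O paths and compare one attribute
        # (current / connected / accessible) of the path on the given port.
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    check_status() {
        # Expected values, in order: current(4420) current(4421) connected(4420)
        # connected(4421) accessible(4420) accessible(4421).
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Read this way, the sequence above is: set both listeners to non_optimized, sleep 1, expect every attribute on both ports to be true; then set 4421 to inaccessible and expect its current and accessible flags to drop to false while 4420 stays fully usable.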
00:18:51.225 00:18:51.225 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89259 00:18:51.225 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:51.225 [2024-07-15 13:02:26.757012] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:18:51.225 [2024-07-15 13:02:26.757171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89259 ] 00:18:51.225 [2024-07-15 13:02:26.893642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.225 [2024-07-15 13:02:26.964047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.225 Running I/O for 90 seconds... 00:18:51.225 [2024-07-15 13:02:44.045670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.046252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.046417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.046523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.046618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.046705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.046827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.046917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.047093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.047291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.047468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.047644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.047839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.047937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.048110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.048307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.048487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.048660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.048875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.048961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.049049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.049132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.049218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.049306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:18:51.225 [2024-07-15 13:02:44.049394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.049478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.049566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.049646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.049738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.049852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.049953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.050630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.225 [2024-07-15 13:02:44.050646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.051797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.051823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.051849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.051866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.051902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.051919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.051943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.051959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.051982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.051998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:51.225 [2024-07-15 13:02:44.052352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.225 [2024-07-15 13:02:44.052599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:51.225 [2024-07-15 13:02:44.052622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.052638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.052661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.052677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.052700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.052716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.052740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.052755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.052793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.052811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.053123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:18:51.226 [2024-07-15 13:02:44.053954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.053969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.053998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:02:44.054595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:02:44.054957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:02:44.054986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:51.226 [2024-07-15 13:03:00.593904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.593970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.593985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.594049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.594088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.226 [2024-07-15 13:03:00.594124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:03:00.594160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:03:00.594197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.226 [2024-07-15 13:03:00.594586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:51.226 [2024-07-15 13:03:00.594609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.594733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.594785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.594824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.594948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.594969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.594984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.595030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.595066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.595102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.595138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.595174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.595217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.595235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.596831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.596860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.596898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.596915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.596937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.596953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.596992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
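Every completion in this stretch of the try.txt dump carries the same ANA status. When a condensed view is more useful than the raw command/completion pairs, a one-liner along these lines does it (the path is taken from the cat at the start of this dump; the grep pattern is an assumption about the NOTICE format, and it would have to run before the test's own rm -f removes the file):

    # Tally completions per status string, e.g. how many landed as
    # ASYMMETRIC ACCESS INACCESSIBLE (03/02).
    grep -o '\*NOTICE\*: [A-Z ]*([0-9a-f/]*)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt |
        sort | uniq -c | sort -rn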
00:18:51.227 [2024-07-15 13:03:00.597067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.227 [2024-07-15 13:03:00.597720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:51.227 [2024-07-15 13:03:00.597835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.227 [2024-07-15 13:03:00.597850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:51.227 Received shutdown signal, test time was about 34.528725 seconds 00:18:51.227 00:18:51.227 Latency(us) 00:18:51.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.227 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.227 Verification LBA range: start 0x0 length 0x4000 00:18:51.227 Nvme0n1 : 34.53 8407.89 32.84 0.00 0.00 15191.38 174.08 4026531.84 00:18:51.227 =================================================================================================================== 00:18:51.227 Total : 8407.89 32.84 0.00 0.00 15191.38 174.08 4026531.84 00:18:51.227 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # nvmfcleanup 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.483 rmmod nvme_tcp 00:18:51.483 rmmod nvme_fabrics 00:18:51.483 rmmod nvme_keyring 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # '[' -n 89149 ']' 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # killprocess 89149 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89149 ']' 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89149 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.483 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89149 00:18:51.741 killing process with pid 89149 00:18:51.741 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:51.741 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:51.741 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89149' 00:18:51.741 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89149 00:18:51.741 13:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89149 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@282 -- # remove_spdk_ns 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:18:51.741 ************************************ 00:18:51.741 END TEST nvmf_host_multipath_status 00:18:51.741 ************************************ 00:18:51.741 00:18:51.741 real 0m40.631s 00:18:51.741 user 2m13.747s 00:18:51.741 sys 0m9.643s 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.741 13:03:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:51.741 13:03:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:51.741 13:03:04 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:51.741 13:03:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:51.741 13:03:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.741 13:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.001 ************************************ 00:18:52.001 START TEST nvmf_discovery_remove_ifc 00:18:52.001 ************************************ 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:52.001 * Looking for test storage... 
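For reference, the multipath_status teardown traced above reduces to one target-side RPC plus unloading the kernel initiator modules. A minimal sketch of that sequence, assuming the SPDK checkout sits at /home/vagrant/spdk_repo/spdk as in this run (the NQN is the one this test created earlier; nvme_keyring is removed implicitly as a dependency, as the rmmod output shows):
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Drop the subsystem the test exposed over TCP.
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Unload the kernel NVMe/TCP initiator stack used for the host side.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics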
00:18:52.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.001 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:52.002 13:03:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # prepare_net_devs 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # nvmf_veth_init 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:18:52.002 Cannot find device "nvmf_tgt_br" 
00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.002 Cannot find device "nvmf_tgt_br2" 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:18:52.002 Cannot find device "nvmf_tgt_br" 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:18:52.002 Cannot find device "nvmf_tgt_br2" 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.002 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:18:52.261 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:18:52.262 13:03:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:18:52.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:18:52.262 00:18:52.262 --- 10.0.0.2 ping statistics --- 00:18:52.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.262 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:18:52.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:52.262 00:18:52.262 --- 10.0.0.3 ping statistics --- 00:18:52.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.262 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:52.262 00:18:52.262 --- 10.0.0.1 ping statistics --- 00:18:52.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.262 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@437 -- # return 0 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@485 -- # nvmfpid=90564 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@486 -- # waitforlisten 90564 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90564 ']' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.262 13:03:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:52.521 [2024-07-15 13:03:04.729656] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:18:52.521 [2024-07-15 13:03:04.729806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.521 [2024-07-15 13:03:04.865244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.521 [2024-07-15 13:03:04.953177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.521 [2024-07-15 13:03:04.953265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.521 [2024-07-15 13:03:04.953286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.521 [2024-07-15 13:03:04.953300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.521 [2024-07-15 13:03:04.953313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.521 [2024-07-15 13:03:04.953361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.456 [2024-07-15 13:03:05.722715] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.456 [2024-07-15 13:03:05.730841] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:53.456 null0 00:18:53.456 [2024-07-15 13:03:05.762825] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.456 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
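The single rpc_cmd batch above is what stands up the target: a TCP transport, a discovery listener on 10.0.0.2:8009, a null bdev, and a data listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0 (the subsystem the discovery service reports later in the trace). Spelled out as individual rpc.py calls it looks roughly like the sketch below; the null-bdev size and block size, the -a/-s subsystem flags, and the serial number are illustrative assumptions, and exact option spellings can differ between SPDK versions:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target app uses the default /var/tmp/spdk.sock
  "$RPC" nvmf_create_transport -t tcp
  "$RPC" bdev_null_create null0 1000 512            # 1000 MB, 512-byte blocks (sizes assumed)
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009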
00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90614 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90614 /tmp/host.sock 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90614 ']' 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.456 13:03:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:53.456 [2024-07-15 13:03:05.834315] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:18:53.456 [2024-07-15 13:03:05.834411] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90614 ] 00:18:53.715 [2024-07-15 13:03:05.965556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.715 [2024-07-15 13:03:06.053128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 
--fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.649 13:03:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.583 [2024-07-15 13:03:07.946169] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:55.583 [2024-07-15 13:03:07.946207] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:55.583 [2024-07-15 13:03:07.946227] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:55.583 [2024-07-15 13:03:08.032328] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:55.843 [2024-07-15 13:03:08.089209] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:55.843 [2024-07-15 13:03:08.089314] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:55.843 [2024-07-15 13:03:08.089341] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:55.843 [2024-07-15 13:03:08.089358] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:55.843 [2024-07-15 13:03:08.089383] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:55.843 [2024-07-15 13:03:08.094434] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16c1660 was disconnected and freed. delete nvme_qpair. 
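On the host socket the sequence above is plain rpc.py traffic, all of it visible in the trace; restated without the xtrace noise (only the RPC variable is introduced here for readability):
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  # The host app was started with --wait-for-rpc, so bdev_nvme options must be
  # set before framework_start_init runs module initialization.
  $RPC bdev_nvme_set_options -e 1
  $RPC framework_start_init
  # Attach via the discovery service; --wait-for-attach blocks until the
  # data-plane controller (nvme0) has been connected.
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  $RPC bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1 at this point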
00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:55.843 13:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:56.782 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.041 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:57.041 13:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:57.977 13:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:58.914 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.172 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:59.172 13:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:00.107 13:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:01.041 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.299 [2024-07-15 13:03:13.517273] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:01.299 [2024-07-15 13:03:13.517344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.299 [2024-07-15 13:03:13.517361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.299 [2024-07-15 13:03:13.517376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.299 [2024-07-15 13:03:13.517385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.299 [2024-07-15 13:03:13.517395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.299 [2024-07-15 13:03:13.517405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.299 [2024-07-15 13:03:13.517415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.299 [2024-07-15 13:03:13.517424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.299 [2024-07-15 13:03:13.517435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.299 [2024-07-15 13:03:13.517444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.299 [2024-07-15 13:03:13.517453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168a920 is same with the state(5) to be set 00:19:01.299 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:01.299 13:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:01.299 [2024-07-15 13:03:13.527266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168a920 (9): Bad file descriptor 00:19:01.299 [2024-07-15 13:03:13.537289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:02.234 13:03:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:02.234 [2024-07-15 13:03:14.555793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:02.234 [2024-07-15 13:03:14.555872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168a920 with addr=10.0.0.2, port=4420 00:19:02.234 [2024-07-15 13:03:14.555896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168a920 is same with the state(5) to be set 00:19:02.234 [2024-07-15 13:03:14.555950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168a920 (9): Bad file descriptor 00:19:02.234 [2024-07-15 13:03:14.556461] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:02.234 [2024-07-15 13:03:14.556493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:02.234 [2024-07-15 13:03:14.556505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:02.234 [2024-07-15 13:03:14.556518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:02.234 [2024-07-15 13:03:14.556544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:02.234 [2024-07-15 13:03:14.556558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:02.234 13:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:03.172 [2024-07-15 13:03:15.556603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:03.172 [2024-07-15 13:03:15.556668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:03.172 [2024-07-15 13:03:15.556697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:03.172 [2024-07-15 13:03:15.556707] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:03.172 [2024-07-15 13:03:15.556730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
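The repeated bdev_get_bdevs / jq / sort / xargs calls in this stretch come from the script's get_bdev_list and wait_for_bdev helpers, which poll once per second until the bdev list matches an expected value (an empty string here, since the target interface was taken down). A minimal sketch of helpers with the same shape follows; the retry cap and exact loop structure are assumptions, not the script's literal code:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  get_bdev_list() {
      $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1 i
      for ((i = 0; i < 20; i++)); do
          [[ "$(get_bdev_list)" == "$expected" ]] && return 0
          sleep 1
      done
      return 1
  }
  wait_for_bdev ''   # list drains once the controller is declared lost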
00:19:03.172 [2024-07-15 13:03:15.556759] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:03.172 [2024-07-15 13:03:15.556845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.172 [2024-07-15 13:03:15.556863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.172 [2024-07-15 13:03:15.556878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.172 [2024-07-15 13:03:15.556888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.172 [2024-07-15 13:03:15.556898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.172 [2024-07-15 13:03:15.556907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.172 [2024-07-15 13:03:15.556933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.172 [2024-07-15 13:03:15.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.172 [2024-07-15 13:03:15.556953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.172 [2024-07-15 13:03:15.556962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.172 [2024-07-15 13:03:15.556972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
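What the errors above add up to: with the target address gone, reconnect attempts (paced by --reconnect-delay-sec 1) keep failing, and once --ctrlr-loss-timeout-sec 2 expires bdev_nvme deletes the nvme0 controller and the discovery poller removes its entry for cnode0, which is why nvme0n1 drops out of the bdev list. The same state can be inspected directly on the host socket; this is an aside, since bdev_nvme_get_controllers is a stock SPDK RPC that the test itself does not call:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  $RPC bdev_nvme_get_controllers            # empty once nvme0 has been deleted
  $RPC bdev_get_bdevs | jq -r '.[].name'    # nvme0n1 is gone as well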
00:19:03.172 [2024-07-15 13:03:15.557261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162d3c0 (9): Bad file descriptor 00:19:03.172 [2024-07-15 13:03:15.558272] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:03.172 [2024-07-15 13:03:15.558303] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.172 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:03.431 13:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.368 13:03:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:04.368 13:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:05.303 [2024-07-15 13:03:17.567982] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:05.303 [2024-07-15 13:03:17.568023] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:05.303 [2024-07-15 13:03:17.568042] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:05.303 [2024-07-15 13:03:17.654116] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:05.303 [2024-07-15 13:03:17.710263] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:05.303 [2024-07-15 13:03:17.710320] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:05.303 [2024-07-15 13:03:17.710344] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:05.303 [2024-07-15 13:03:17.710360] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:05.303 [2024-07-15 13:03:17.710370] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:05.303 [2024-07-15 13:03:17.716466] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x169f5c0 was disconnected and freed. delete nvme_qpair. 
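The recovery half of the test mirrors the removal: the script re-adds the address and brings the interface back up, the discovery poller reconnects to 10.0.0.2:8009 on its own, and because the previous controller was deleted the re-attached one is named nvme1, so the namespace reappears as nvme1n1. The relevant commands, as run in the trace (the final check is a convenience added here, not part of the script):
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # No host-side RPC is needed; the discovery service re-attaches automatically.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
      | jq -r '.[].name'                    # expect nvme1n1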
00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90614 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90614 ']' 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90614 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90614 00:19:05.561 killing process with pid 90614 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90614' 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90614 00:19:05.561 13:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90614 00:19:05.561 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:05.561 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:05.561 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:05.819 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.819 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:05.819 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.819 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.819 rmmod nvme_tcp 00:19:05.819 rmmod nvme_fabrics 00:19:05.819 rmmod nvme_keyring 00:19:05.819 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:05.820 13:03:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # '[' -n 90564 ']' 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # killprocess 90564 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90564 ']' 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90564 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90564 00:19:05.820 killing process with pid 90564 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90564' 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90564 00:19:05.820 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90564 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:19:06.079 00:19:06.079 real 0m14.144s 00:19:06.079 user 0m25.581s 00:19:06.079 sys 0m1.507s 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.079 ************************************ 00:19:06.079 END TEST nvmf_discovery_remove_ifc 00:19:06.079 ************************************ 00:19:06.079 13:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:06.079 13:03:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.079 13:03:18 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:06.079 13:03:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.079 13:03:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.079 13:03:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.079 ************************************ 00:19:06.079 START TEST nvmf_identify_kernel_target 00:19:06.079 ************************************ 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:06.079 * Looking for test storage... 00:19:06.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # '[' '' -eq 1 ']' 00:19:06.079 /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh: line 11: [: : integer expression expected 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # nvmftestinit 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.079 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmf_veth_init 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:19:06.080 Cannot find device "nvmf_tgt_br" 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.080 Cannot find device "nvmf_tgt_br2" 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # true 00:19:06.080 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link set 
nvmf_init_br down 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:19:06.339 Cannot find device "nvmf_tgt_br" 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:19:06.339 Cannot find device "nvmf_tgt_br2" 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.339 
13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.339 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:19:06.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:06.597 00:19:06.597 --- 10.0.0.2 ping statistics --- 00:19:06.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.597 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:19:06.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:06.597 00:19:06.597 --- 10.0.0.3 ping statistics --- 00:19:06.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.597 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:06.597 00:19:06.597 --- 10.0.0.1 ping statistics --- 00:19:06.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.597 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # return 0 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@20 -- # get_main_ns_ip 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # local ip 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@20 -- # target_ip=10.0.0.1 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@21 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:06.597 13:03:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@643 -- # local block nvme 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ ! -e /sys/module/nvmet ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@646 -- # modprobe nvmet 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:06.597 13:03:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:06.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.856 Waiting for block devices as requested 00:19:06.856 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:07.115 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:07.115 No valid GPT data, bailing 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # is_block_zoned nvme0n2 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # block_in_use nvme0n2 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:07.115 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:07.115 No valid GPT data, bailing 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n2 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # is_block_zoned nvme0n3 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # block_in_use nvme0n3 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:07.374 No valid GPT data, bailing 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n3 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # is_block_zoned nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # block_in_use nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:07.374 No valid GPT data, bailing 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@657 -- # nvme=/dev/nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # [[ -b /dev/nvme1n1 ]] 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo /dev/nvme1n1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # echo tcp 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # echo 4420 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # echo ipv4 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.1 -t tcp -s 4420 00:19:07.374 00:19:07.374 Discovery Log Number of Records 2, Generation counter 2 00:19:07.374 =====Discovery Log Entry 0====== 00:19:07.374 trtype: tcp 00:19:07.374 adrfam: ipv4 00:19:07.374 subtype: current discovery subsystem 00:19:07.374 treq: not specified, sq flow control disable supported 00:19:07.374 portid: 1 00:19:07.374 trsvcid: 4420 00:19:07.374 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:07.374 traddr: 10.0.0.1 00:19:07.374 eflags: none 00:19:07.374 sectype: none 00:19:07.374 =====Discovery Log Entry 1====== 00:19:07.374 trtype: tcp 00:19:07.374 adrfam: ipv4 00:19:07.374 subtype: nvme subsystem 00:19:07.374 treq: not specified, sq flow control disable supported 00:19:07.374 portid: 1 00:19:07.374 trsvcid: 4420 00:19:07.374 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:07.374 traddr: 10.0.0.1 00:19:07.374 eflags: none 00:19:07.374 sectype: none 00:19:07.374 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:07.374 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:07.632 ===================================================== 00:19:07.632 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:07.632 ===================================================== 00:19:07.632 Controller Capabilities/Features 00:19:07.632 ================================ 00:19:07.632 Vendor ID: 0000 00:19:07.632 Subsystem Vendor ID: 0000 00:19:07.632 Serial Number: 72d2eda6f618e74c700e 00:19:07.632 Model Number: Linux 00:19:07.632 Firmware Version: 6.7.0-68 00:19:07.632 Recommended Arb Burst: 0 
00:19:07.632 IEEE OUI Identifier: 00 00 00 00:19:07.632 Multi-path I/O 00:19:07.632 May have multiple subsystem ports: No 00:19:07.632 May have multiple controllers: No 00:19:07.632 Associated with SR-IOV VF: No 00:19:07.632 Max Data Transfer Size: Unlimited 00:19:07.632 Max Number of Namespaces: 0 00:19:07.632 Max Number of I/O Queues: 1024 00:19:07.632 NVMe Specification Version (VS): 1.3 00:19:07.632 NVMe Specification Version (Identify): 1.3 00:19:07.632 Maximum Queue Entries: 1024 00:19:07.632 Contiguous Queues Required: No 00:19:07.632 Arbitration Mechanisms Supported 00:19:07.632 Weighted Round Robin: Not Supported 00:19:07.632 Vendor Specific: Not Supported 00:19:07.632 Reset Timeout: 7500 ms 00:19:07.632 Doorbell Stride: 4 bytes 00:19:07.632 NVM Subsystem Reset: Not Supported 00:19:07.632 Command Sets Supported 00:19:07.632 NVM Command Set: Supported 00:19:07.632 Boot Partition: Not Supported 00:19:07.632 Memory Page Size Minimum: 4096 bytes 00:19:07.632 Memory Page Size Maximum: 4096 bytes 00:19:07.632 Persistent Memory Region: Not Supported 00:19:07.632 Optional Asynchronous Events Supported 00:19:07.632 Namespace Attribute Notices: Not Supported 00:19:07.632 Firmware Activation Notices: Not Supported 00:19:07.632 ANA Change Notices: Not Supported 00:19:07.632 PLE Aggregate Log Change Notices: Not Supported 00:19:07.632 LBA Status Info Alert Notices: Not Supported 00:19:07.632 EGE Aggregate Log Change Notices: Not Supported 00:19:07.632 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.632 Zone Descriptor Change Notices: Not Supported 00:19:07.632 Discovery Log Change Notices: Supported 00:19:07.632 Controller Attributes 00:19:07.632 128-bit Host Identifier: Not Supported 00:19:07.632 Non-Operational Permissive Mode: Not Supported 00:19:07.632 NVM Sets: Not Supported 00:19:07.632 Read Recovery Levels: Not Supported 00:19:07.632 Endurance Groups: Not Supported 00:19:07.632 Predictable Latency Mode: Not Supported 00:19:07.632 Traffic Based Keep ALive: Not Supported 00:19:07.632 Namespace Granularity: Not Supported 00:19:07.632 SQ Associations: Not Supported 00:19:07.632 UUID List: Not Supported 00:19:07.632 Multi-Domain Subsystem: Not Supported 00:19:07.632 Fixed Capacity Management: Not Supported 00:19:07.632 Variable Capacity Management: Not Supported 00:19:07.632 Delete Endurance Group: Not Supported 00:19:07.632 Delete NVM Set: Not Supported 00:19:07.632 Extended LBA Formats Supported: Not Supported 00:19:07.632 Flexible Data Placement Supported: Not Supported 00:19:07.632 00:19:07.632 Controller Memory Buffer Support 00:19:07.632 ================================ 00:19:07.632 Supported: No 00:19:07.632 00:19:07.632 Persistent Memory Region Support 00:19:07.632 ================================ 00:19:07.632 Supported: No 00:19:07.632 00:19:07.632 Admin Command Set Attributes 00:19:07.632 ============================ 00:19:07.632 Security Send/Receive: Not Supported 00:19:07.632 Format NVM: Not Supported 00:19:07.632 Firmware Activate/Download: Not Supported 00:19:07.632 Namespace Management: Not Supported 00:19:07.632 Device Self-Test: Not Supported 00:19:07.632 Directives: Not Supported 00:19:07.632 NVMe-MI: Not Supported 00:19:07.633 Virtualization Management: Not Supported 00:19:07.633 Doorbell Buffer Config: Not Supported 00:19:07.633 Get LBA Status Capability: Not Supported 00:19:07.633 Command & Feature Lockdown Capability: Not Supported 00:19:07.633 Abort Command Limit: 1 00:19:07.633 Async Event Request Limit: 1 00:19:07.633 Number of Firmware Slots: N/A 
00:19:07.633 Firmware Slot 1 Read-Only: N/A 00:19:07.633 Firmware Activation Without Reset: N/A 00:19:07.633 Multiple Update Detection Support: N/A 00:19:07.633 Firmware Update Granularity: No Information Provided 00:19:07.633 Per-Namespace SMART Log: No 00:19:07.633 Asymmetric Namespace Access Log Page: Not Supported 00:19:07.633 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:07.633 Command Effects Log Page: Not Supported 00:19:07.633 Get Log Page Extended Data: Supported 00:19:07.633 Telemetry Log Pages: Not Supported 00:19:07.633 Persistent Event Log Pages: Not Supported 00:19:07.633 Supported Log Pages Log Page: May Support 00:19:07.633 Commands Supported & Effects Log Page: Not Supported 00:19:07.633 Feature Identifiers & Effects Log Page:May Support 00:19:07.633 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.633 Data Area 4 for Telemetry Log: Not Supported 00:19:07.633 Error Log Page Entries Supported: 1 00:19:07.633 Keep Alive: Not Supported 00:19:07.633 00:19:07.633 NVM Command Set Attributes 00:19:07.633 ========================== 00:19:07.633 Submission Queue Entry Size 00:19:07.633 Max: 1 00:19:07.633 Min: 1 00:19:07.633 Completion Queue Entry Size 00:19:07.633 Max: 1 00:19:07.633 Min: 1 00:19:07.633 Number of Namespaces: 0 00:19:07.633 Compare Command: Not Supported 00:19:07.633 Write Uncorrectable Command: Not Supported 00:19:07.633 Dataset Management Command: Not Supported 00:19:07.633 Write Zeroes Command: Not Supported 00:19:07.633 Set Features Save Field: Not Supported 00:19:07.633 Reservations: Not Supported 00:19:07.633 Timestamp: Not Supported 00:19:07.633 Copy: Not Supported 00:19:07.633 Volatile Write Cache: Not Present 00:19:07.633 Atomic Write Unit (Normal): 1 00:19:07.633 Atomic Write Unit (PFail): 1 00:19:07.633 Atomic Compare & Write Unit: 1 00:19:07.633 Fused Compare & Write: Not Supported 00:19:07.633 Scatter-Gather List 00:19:07.633 SGL Command Set: Supported 00:19:07.633 SGL Keyed: Not Supported 00:19:07.633 SGL Bit Bucket Descriptor: Not Supported 00:19:07.633 SGL Metadata Pointer: Not Supported 00:19:07.633 Oversized SGL: Not Supported 00:19:07.633 SGL Metadata Address: Not Supported 00:19:07.633 SGL Offset: Supported 00:19:07.633 Transport SGL Data Block: Not Supported 00:19:07.633 Replay Protected Memory Block: Not Supported 00:19:07.633 00:19:07.633 Firmware Slot Information 00:19:07.633 ========================= 00:19:07.633 Active slot: 0 00:19:07.633 00:19:07.633 00:19:07.633 Error Log 00:19:07.633 ========= 00:19:07.633 00:19:07.633 Active Namespaces 00:19:07.633 ================= 00:19:07.633 Discovery Log Page 00:19:07.633 ================== 00:19:07.633 Generation Counter: 2 00:19:07.633 Number of Records: 2 00:19:07.633 Record Format: 0 00:19:07.633 00:19:07.633 Discovery Log Entry 0 00:19:07.633 ---------------------- 00:19:07.633 Transport Type: 3 (TCP) 00:19:07.633 Address Family: 1 (IPv4) 00:19:07.633 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:07.633 Entry Flags: 00:19:07.633 Duplicate Returned Information: 0 00:19:07.633 Explicit Persistent Connection Support for Discovery: 0 00:19:07.633 Transport Requirements: 00:19:07.633 Secure Channel: Not Specified 00:19:07.633 Port ID: 1 (0x0001) 00:19:07.633 Controller ID: 65535 (0xffff) 00:19:07.633 Admin Max SQ Size: 32 00:19:07.633 Transport Service Identifier: 4420 00:19:07.633 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:07.633 Transport Address: 10.0.0.1 00:19:07.633 Discovery Log Entry 1 00:19:07.633 ---------------------- 
00:19:07.633 Transport Type: 3 (TCP) 00:19:07.633 Address Family: 1 (IPv4) 00:19:07.633 Subsystem Type: 2 (NVM Subsystem) 00:19:07.633 Entry Flags: 00:19:07.633 Duplicate Returned Information: 0 00:19:07.633 Explicit Persistent Connection Support for Discovery: 0 00:19:07.633 Transport Requirements: 00:19:07.633 Secure Channel: Not Specified 00:19:07.633 Port ID: 1 (0x0001) 00:19:07.633 Controller ID: 65535 (0xffff) 00:19:07.633 Admin Max SQ Size: 32 00:19:07.633 Transport Service Identifier: 4420 00:19:07.633 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:07.633 Transport Address: 10.0.0.1 00:19:07.633 13:03:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:07.892 get_feature(0x01) failed 00:19:07.892 get_feature(0x02) failed 00:19:07.892 get_feature(0x04) failed 00:19:07.892 ===================================================== 00:19:07.892 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:07.892 ===================================================== 00:19:07.892 Controller Capabilities/Features 00:19:07.892 ================================ 00:19:07.892 Vendor ID: 0000 00:19:07.892 Subsystem Vendor ID: 0000 00:19:07.892 Serial Number: 6135bab9b7555133a54e 00:19:07.892 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:07.892 Firmware Version: 6.7.0-68 00:19:07.892 Recommended Arb Burst: 6 00:19:07.892 IEEE OUI Identifier: 00 00 00 00:19:07.892 Multi-path I/O 00:19:07.892 May have multiple subsystem ports: Yes 00:19:07.892 May have multiple controllers: Yes 00:19:07.892 Associated with SR-IOV VF: No 00:19:07.892 Max Data Transfer Size: Unlimited 00:19:07.892 Max Number of Namespaces: 1024 00:19:07.892 Max Number of I/O Queues: 128 00:19:07.892 NVMe Specification Version (VS): 1.3 00:19:07.892 NVMe Specification Version (Identify): 1.3 00:19:07.892 Maximum Queue Entries: 1024 00:19:07.892 Contiguous Queues Required: No 00:19:07.892 Arbitration Mechanisms Supported 00:19:07.892 Weighted Round Robin: Not Supported 00:19:07.892 Vendor Specific: Not Supported 00:19:07.892 Reset Timeout: 7500 ms 00:19:07.892 Doorbell Stride: 4 bytes 00:19:07.892 NVM Subsystem Reset: Not Supported 00:19:07.892 Command Sets Supported 00:19:07.892 NVM Command Set: Supported 00:19:07.892 Boot Partition: Not Supported 00:19:07.892 Memory Page Size Minimum: 4096 bytes 00:19:07.892 Memory Page Size Maximum: 4096 bytes 00:19:07.892 Persistent Memory Region: Not Supported 00:19:07.892 Optional Asynchronous Events Supported 00:19:07.892 Namespace Attribute Notices: Supported 00:19:07.892 Firmware Activation Notices: Not Supported 00:19:07.892 ANA Change Notices: Supported 00:19:07.892 PLE Aggregate Log Change Notices: Not Supported 00:19:07.892 LBA Status Info Alert Notices: Not Supported 00:19:07.892 EGE Aggregate Log Change Notices: Not Supported 00:19:07.892 Normal NVM Subsystem Shutdown event: Not Supported 00:19:07.892 Zone Descriptor Change Notices: Not Supported 00:19:07.892 Discovery Log Change Notices: Not Supported 00:19:07.892 Controller Attributes 00:19:07.892 128-bit Host Identifier: Supported 00:19:07.892 Non-Operational Permissive Mode: Not Supported 00:19:07.892 NVM Sets: Not Supported 00:19:07.892 Read Recovery Levels: Not Supported 00:19:07.892 Endurance Groups: Not Supported 00:19:07.892 Predictable Latency Mode: Not Supported 00:19:07.892 Traffic Based Keep ALive: 
Supported 00:19:07.892 Namespace Granularity: Not Supported 00:19:07.892 SQ Associations: Not Supported 00:19:07.892 UUID List: Not Supported 00:19:07.892 Multi-Domain Subsystem: Not Supported 00:19:07.892 Fixed Capacity Management: Not Supported 00:19:07.892 Variable Capacity Management: Not Supported 00:19:07.892 Delete Endurance Group: Not Supported 00:19:07.892 Delete NVM Set: Not Supported 00:19:07.892 Extended LBA Formats Supported: Not Supported 00:19:07.892 Flexible Data Placement Supported: Not Supported 00:19:07.892 00:19:07.892 Controller Memory Buffer Support 00:19:07.892 ================================ 00:19:07.892 Supported: No 00:19:07.892 00:19:07.892 Persistent Memory Region Support 00:19:07.892 ================================ 00:19:07.892 Supported: No 00:19:07.892 00:19:07.892 Admin Command Set Attributes 00:19:07.892 ============================ 00:19:07.892 Security Send/Receive: Not Supported 00:19:07.892 Format NVM: Not Supported 00:19:07.892 Firmware Activate/Download: Not Supported 00:19:07.892 Namespace Management: Not Supported 00:19:07.892 Device Self-Test: Not Supported 00:19:07.892 Directives: Not Supported 00:19:07.892 NVMe-MI: Not Supported 00:19:07.892 Virtualization Management: Not Supported 00:19:07.892 Doorbell Buffer Config: Not Supported 00:19:07.892 Get LBA Status Capability: Not Supported 00:19:07.892 Command & Feature Lockdown Capability: Not Supported 00:19:07.892 Abort Command Limit: 4 00:19:07.892 Async Event Request Limit: 4 00:19:07.892 Number of Firmware Slots: N/A 00:19:07.892 Firmware Slot 1 Read-Only: N/A 00:19:07.892 Firmware Activation Without Reset: N/A 00:19:07.892 Multiple Update Detection Support: N/A 00:19:07.892 Firmware Update Granularity: No Information Provided 00:19:07.892 Per-Namespace SMART Log: Yes 00:19:07.892 Asymmetric Namespace Access Log Page: Supported 00:19:07.892 ANA Transition Time : 10 sec 00:19:07.892 00:19:07.892 Asymmetric Namespace Access Capabilities 00:19:07.892 ANA Optimized State : Supported 00:19:07.892 ANA Non-Optimized State : Supported 00:19:07.892 ANA Inaccessible State : Supported 00:19:07.892 ANA Persistent Loss State : Supported 00:19:07.892 ANA Change State : Supported 00:19:07.892 ANAGRPID is not changed : No 00:19:07.892 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:07.892 00:19:07.892 ANA Group Identifier Maximum : 128 00:19:07.892 Number of ANA Group Identifiers : 128 00:19:07.892 Max Number of Allowed Namespaces : 1024 00:19:07.892 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:07.892 Command Effects Log Page: Supported 00:19:07.892 Get Log Page Extended Data: Supported 00:19:07.892 Telemetry Log Pages: Not Supported 00:19:07.892 Persistent Event Log Pages: Not Supported 00:19:07.892 Supported Log Pages Log Page: May Support 00:19:07.892 Commands Supported & Effects Log Page: Not Supported 00:19:07.892 Feature Identifiers & Effects Log Page:May Support 00:19:07.892 NVMe-MI Commands & Effects Log Page: May Support 00:19:07.892 Data Area 4 for Telemetry Log: Not Supported 00:19:07.892 Error Log Page Entries Supported: 128 00:19:07.892 Keep Alive: Supported 00:19:07.892 Keep Alive Granularity: 1000 ms 00:19:07.892 00:19:07.892 NVM Command Set Attributes 00:19:07.892 ========================== 00:19:07.892 Submission Queue Entry Size 00:19:07.892 Max: 64 00:19:07.892 Min: 64 00:19:07.892 Completion Queue Entry Size 00:19:07.892 Max: 16 00:19:07.892 Min: 16 00:19:07.892 Number of Namespaces: 1024 00:19:07.892 Compare Command: Not Supported 00:19:07.892 Write Uncorrectable Command: Not 
Supported 00:19:07.892 Dataset Management Command: Supported 00:19:07.892 Write Zeroes Command: Supported 00:19:07.892 Set Features Save Field: Not Supported 00:19:07.892 Reservations: Not Supported 00:19:07.892 Timestamp: Not Supported 00:19:07.892 Copy: Not Supported 00:19:07.892 Volatile Write Cache: Present 00:19:07.892 Atomic Write Unit (Normal): 1 00:19:07.892 Atomic Write Unit (PFail): 1 00:19:07.892 Atomic Compare & Write Unit: 1 00:19:07.892 Fused Compare & Write: Not Supported 00:19:07.892 Scatter-Gather List 00:19:07.892 SGL Command Set: Supported 00:19:07.892 SGL Keyed: Not Supported 00:19:07.893 SGL Bit Bucket Descriptor: Not Supported 00:19:07.893 SGL Metadata Pointer: Not Supported 00:19:07.893 Oversized SGL: Not Supported 00:19:07.893 SGL Metadata Address: Not Supported 00:19:07.893 SGL Offset: Supported 00:19:07.893 Transport SGL Data Block: Not Supported 00:19:07.893 Replay Protected Memory Block: Not Supported 00:19:07.893 00:19:07.893 Firmware Slot Information 00:19:07.893 ========================= 00:19:07.893 Active slot: 0 00:19:07.893 00:19:07.893 Asymmetric Namespace Access 00:19:07.893 =========================== 00:19:07.893 Change Count : 0 00:19:07.893 Number of ANA Group Descriptors : 1 00:19:07.893 ANA Group Descriptor : 0 00:19:07.893 ANA Group ID : 1 00:19:07.893 Number of NSID Values : 1 00:19:07.893 Change Count : 0 00:19:07.893 ANA State : 1 00:19:07.893 Namespace Identifier : 1 00:19:07.893 00:19:07.893 Commands Supported and Effects 00:19:07.893 ============================== 00:19:07.893 Admin Commands 00:19:07.893 -------------- 00:19:07.893 Get Log Page (02h): Supported 00:19:07.893 Identify (06h): Supported 00:19:07.893 Abort (08h): Supported 00:19:07.893 Set Features (09h): Supported 00:19:07.893 Get Features (0Ah): Supported 00:19:07.893 Asynchronous Event Request (0Ch): Supported 00:19:07.893 Keep Alive (18h): Supported 00:19:07.893 I/O Commands 00:19:07.893 ------------ 00:19:07.893 Flush (00h): Supported 00:19:07.893 Write (01h): Supported LBA-Change 00:19:07.893 Read (02h): Supported 00:19:07.893 Write Zeroes (08h): Supported LBA-Change 00:19:07.893 Dataset Management (09h): Supported 00:19:07.893 00:19:07.893 Error Log 00:19:07.893 ========= 00:19:07.893 Entry: 0 00:19:07.893 Error Count: 0x3 00:19:07.893 Submission Queue Id: 0x0 00:19:07.893 Command Id: 0x5 00:19:07.893 Phase Bit: 0 00:19:07.893 Status Code: 0x2 00:19:07.893 Status Code Type: 0x0 00:19:07.893 Do Not Retry: 1 00:19:07.893 Error Location: 0x28 00:19:07.893 LBA: 0x0 00:19:07.893 Namespace: 0x0 00:19:07.893 Vendor Log Page: 0x0 00:19:07.893 ----------- 00:19:07.893 Entry: 1 00:19:07.893 Error Count: 0x2 00:19:07.893 Submission Queue Id: 0x0 00:19:07.893 Command Id: 0x5 00:19:07.893 Phase Bit: 0 00:19:07.893 Status Code: 0x2 00:19:07.893 Status Code Type: 0x0 00:19:07.893 Do Not Retry: 1 00:19:07.893 Error Location: 0x28 00:19:07.893 LBA: 0x0 00:19:07.893 Namespace: 0x0 00:19:07.893 Vendor Log Page: 0x0 00:19:07.893 ----------- 00:19:07.893 Entry: 2 00:19:07.893 Error Count: 0x1 00:19:07.893 Submission Queue Id: 0x0 00:19:07.893 Command Id: 0x4 00:19:07.893 Phase Bit: 0 00:19:07.893 Status Code: 0x2 00:19:07.893 Status Code Type: 0x0 00:19:07.893 Do Not Retry: 1 00:19:07.893 Error Location: 0x28 00:19:07.893 LBA: 0x0 00:19:07.893 Namespace: 0x0 00:19:07.893 Vendor Log Page: 0x0 00:19:07.893 00:19:07.893 Number of Queues 00:19:07.893 ================ 00:19:07.893 Number of I/O Submission Queues: 128 00:19:07.893 Number of I/O Completion Queues: 128 00:19:07.893 00:19:07.893 ZNS 
Specific Controller Data 00:19:07.893 ============================ 00:19:07.893 Zone Append Size Limit: 0 00:19:07.893 00:19:07.893 00:19:07.893 Active Namespaces 00:19:07.893 ================= 00:19:07.893 get_feature(0x05) failed 00:19:07.893 Namespace ID:1 00:19:07.893 Command Set Identifier: NVM (00h) 00:19:07.893 Deallocate: Supported 00:19:07.893 Deallocated/Unwritten Error: Not Supported 00:19:07.893 Deallocated Read Value: Unknown 00:19:07.893 Deallocate in Write Zeroes: Not Supported 00:19:07.893 Deallocated Guard Field: 0xFFFF 00:19:07.893 Flush: Supported 00:19:07.893 Reservation: Not Supported 00:19:07.893 Namespace Sharing Capabilities: Multiple Controllers 00:19:07.893 Size (in LBAs): 1310720 (5GiB) 00:19:07.893 Capacity (in LBAs): 1310720 (5GiB) 00:19:07.893 Utilization (in LBAs): 1310720 (5GiB) 00:19:07.893 UUID: 8f5b6f94-f990-4c1f-a9be-946a1920f76e 00:19:07.893 Thin Provisioning: Not Supported 00:19:07.893 Per-NS Atomic Units: Yes 00:19:07.893 Atomic Boundary Size (Normal): 0 00:19:07.893 Atomic Boundary Size (PFail): 0 00:19:07.893 Atomic Boundary Offset: 0 00:19:07.893 NGUID/EUI64 Never Reused: No 00:19:07.893 ANA group ID: 1 00:19:07.893 Namespace Write Protected: No 00:19:07.893 Number of LBA Formats: 1 00:19:07.893 Current LBA Format: LBA Format #00 00:19:07.893 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:07.893 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.893 rmmod nvme_tcp 00:19:07.893 rmmod nvme_fabrics 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip -4 addr 
flush nvmf_init_if 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # echo 0 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:19:07.893 13:03:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:08.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:08.717 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:08.717 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:08.717 00:19:08.717 real 0m2.725s 00:19:08.717 user 0m0.953s 00:19:08.717 sys 0m1.269s 00:19:08.717 13:03:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.717 13:03:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.717 ************************************ 00:19:08.717 END TEST nvmf_identify_kernel_target 00:19:08.717 ************************************ 00:19:08.717 13:03:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:08.717 13:03:21 nvmf_tcp -- nvmf/nvmf.sh@109 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:08.717 13:03:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:08.717 13:03:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.717 13:03:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.717 ************************************ 00:19:08.717 START TEST nvmf_auth_host 00:19:08.717 ************************************ 00:19:08.717 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:08.976 * Looking for test storage... 
00:19:08.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.976 13:03:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # 
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmf_veth_init 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:19:08.977 Cannot find device "nvmf_tgt_br" 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.977 Cannot find device "nvmf_tgt_br2" 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # true 00:19:08.977 13:03:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:19:08.977 Cannot find device "nvmf_tgt_br" 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:19:08.977 Cannot find device "nvmf_tgt_br2" 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.977 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set 
nvmf_init_br master nvmf_br 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:19:09.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:09.237 00:19:09.237 --- 10.0.0.2 ping statistics --- 00:19:09.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.237 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:19:09.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:09.237 00:19:09.237 --- 10.0.0.3 ping statistics --- 00:19:09.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.237 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:09.237 00:19:09.237 --- 10.0.0.1 ping statistics --- 00:19:09.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.237 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@437 -- # return 0 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:09.237 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@485 -- # nvmfpid=91503 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@486 -- # waitforlisten 91503 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91503 ']' 00:19:09.238 13:03:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.238 13:03:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=707b6f4be263f25e14c8b67a86cf8af9 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.TbI 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 707b6f4be263f25e14c8b67a86cf8af9 0 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 707b6f4be263f25e14c8b67a86cf8af9 0 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=707b6f4be263f25e14c8b67a86cf8af9 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:09.804 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.TbI 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.TbI 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TbI 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # 
local digest len file key 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=b904f4817808f76e76d00ea3080b7973902f2f919a76a28de65c932b9e0994d0 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.FXv 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key b904f4817808f76e76d00ea3080b7973902f2f919a76a28de65c932b9e0994d0 3 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 b904f4817808f76e76d00ea3080b7973902f2f919a76a28de65c932b9e0994d0 3 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=b904f4817808f76e76d00ea3080b7973902f2f919a76a28de65c932b9e0994d0 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.FXv 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.FXv 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FXv 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=fe1a4335ff62c74eb480aceb7a70d027e0dd7c72c526a470 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.vkK 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key fe1a4335ff62c74eb480aceb7a70d027e0dd7c72c526a470 0 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 fe1a4335ff62c74eb480aceb7a70d027e0dd7c72c526a470 0 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- 
# key=fe1a4335ff62c74eb480aceb7a70d027e0dd7c72c526a470 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.vkK 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.vkK 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vkK 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:19:09.805 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=ea45ae236833589f226a4d32d8aeb804b6e0cfefbf7cd73e 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.Wow 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key ea45ae236833589f226a4d32d8aeb804b6e0cfefbf7cd73e 2 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 ea45ae236833589f226a4d32d8aeb804b6e0cfefbf7cd73e 2 00:19:10.063 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=ea45ae236833589f226a4d32d8aeb804b6e0cfefbf7cd73e 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.Wow 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.Wow 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Wow 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=7879de9e0ab8ec5ba0322619b694bbba 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.064 
13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.grY 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 7879de9e0ab8ec5ba0322619b694bbba 1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 7879de9e0ab8ec5ba0322619b694bbba 1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=7879de9e0ab8ec5ba0322619b694bbba 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.grY 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.grY 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.grY 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=1bbc750cfea1d79b8bd54856d726fc37 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.MsL 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 1bbc750cfea1d79b8bd54856d726fc37 1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 1bbc750cfea1d79b8bd54856d726fc37 1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=1bbc750cfea1d79b8bd54856d726fc37 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.MsL 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.MsL 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MsL 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 
-- # local -A digests 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=8c10d7e06829812bb7418c6fa8ef3cd398133bab7a16c9bc 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.vR4 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 8c10d7e06829812bb7418c6fa8ef3cd398133bab7a16c9bc 2 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 8c10d7e06829812bb7418c6fa8ef3cd398133bab7a16c9bc 2 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=8c10d7e06829812bb7418c6fa8ef3cd398133bab7a16c9bc 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:19:10.064 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.322 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.vR4 00:19:10.322 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.vR4 00:19:10.322 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vR4 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=2c6217238703863c7d9c6d71a2121cfa 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.m8d 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 2c6217238703863c7d9c6d71a2121cfa 0 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 2c6217238703863c7d9c6d71a2121cfa 0 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=2c6217238703863c7d9c6d71a2121cfa 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.m8d 00:19:10.323 13:03:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.m8d 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.m8d 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@731 -- # key=9c177155a0edf243adedad27b44df9f6aa553754087157086f31d0b14d833015 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.jF8 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 9c177155a0edf243adedad27b44df9f6aa553754087157086f31d0b14d833015 3 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 9c177155a0edf243adedad27b44df9f6aa553754087157086f31d0b14d833015 3 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # key=9c177155a0edf243adedad27b44df9f6aa553754087157086f31d0b14d833015 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.jF8 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.jF8 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jF8 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91503 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91503 ']' 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
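[editor's note] Each gen_dhchap_key call above pulls len/2 random bytes from /dev/urandom as a hex string and feeds it to a short Python helper (the helper body is not captured in the trace) to produce the DHHC-1 secrets used later. The sketch below reconstructs that wrapping under the common NVMe DH-HMAC-CHAP secret convention, base64 of the ASCII secret plus a trailing CRC-32, prefixed with DHHC-1:<hash id>:; the checksum byte order and the helper's exact code are assumptions, not taken from the log.

# Hypothetical reconstruction of the format_dhchap_key step traced above.
# Assumptions: the secret is the ASCII hex string itself, the appended CRC-32
# is packed little-endian, and python3 is available; none of this is shown
# verbatim in the log.
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2                 # e.g. 0 32  (null digest, 32 hex chars)
    local hex_key
    hex_key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$digest_id" "$hex_key" <<'PY'
import base64, struct, sys, zlib
digest_id, secret = int(sys.argv[1]), sys.argv[2].encode()
blob = secret + struct.pack('<I', zlib.crc32(secret) & 0xffffffff)
print(f"DHHC-1:{digest_id:02x}:{base64.b64encode(blob).decode()}:")
PY
}

gen_dhchap_key_sketch 0 32    # e.g. DHHC-1:00:...: suitable for keyring_file_add_key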
00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.323 13:03:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TbI 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FXv ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXv 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vkK 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Wow ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wow 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.grY 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MsL ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MsL 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
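[editor's note] At this point nvmf_tgt has been launched inside nvmf_tgt_ns_spdk with -L nvme_auth and waitforlisten 91503 has returned. The snippet below is only a stand-in for that pair, assuming the usual pattern of starting the app in the namespace and polling the RPC socket until it answers; it is not the harness's actual implementation.

# Stand-in for the nvmfappstart/waitforlisten pair traced above: launch
# nvmf_tgt inside the test netns, then poll the RPC socket until it responds.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break                              # target is up and listening on the RPC socket
    fi
    kill -0 "$nvmfpid" || exit 1           # bail out if the app died while we waited
    sleep 0.1
done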
00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vR4 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.m8d ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.m8d 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jF8 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@643 -- # local block nvme 
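[editor's note] The rpc_cmd keyring_file_add_key calls above register every generated secret file with the target's file-based keyring, pairing each host key keyN with its controller key ckeyN where one exists (ckey4 is deliberately left empty in this run). Spelled out with scripts/rpc.py against the default /var/tmp/spdk.sock socket, the same registrations look roughly like this; the key names and file paths are taken from the trace.

# Same registrations as the rpc_cmd calls above, written out with rpc.py.
# The /tmp/spdk.key-* paths are the files generated earlier in this trace.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

rpc keyring_file_add_key key0  /tmp/spdk.key-null.TbI
rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXv
rpc keyring_file_add_key key1  /tmp/spdk.key-null.vkK
rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wow
rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.grY
rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MsL
rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.vR4
rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.m8d
rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.jF8   # ckeys[4] is empty in this run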
00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ ! -e /sys/module/nvmet ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@646 -- # modprobe nvmet 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:10.897 13:03:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:11.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:11.154 Waiting for block devices as requested 00:19:11.154 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.411 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:11.977 No valid GPT data, bailing 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n2 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n2 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:11.977 No valid GPT data, bailing 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n2 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in 
/sys/block/nvme* 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n3 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n3 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:11.977 No valid GPT data, bailing 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n3 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme1n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme1n1 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:11.977 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:12.235 No valid GPT data, bailing 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme1n1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # [[ -b /dev/nvme1n1 ]] 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo /dev/nvme1n1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 1 00:19:12.235 13:03:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@676 -- # echo tcp 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # echo 4420 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@678 -- # echo ipv4 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.1 -t tcp -s 4420 00:19:12.235 00:19:12.235 Discovery Log Number of Records 2, Generation counter 2 00:19:12.235 =====Discovery Log Entry 0====== 00:19:12.235 trtype: tcp 00:19:12.235 adrfam: ipv4 00:19:12.235 subtype: current discovery subsystem 00:19:12.235 treq: not specified, sq flow control disable supported 00:19:12.235 portid: 1 00:19:12.235 trsvcid: 4420 00:19:12.235 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:12.235 traddr: 10.0.0.1 00:19:12.235 eflags: none 00:19:12.235 sectype: none 00:19:12.235 =====Discovery Log Entry 1====== 00:19:12.235 trtype: tcp 00:19:12.235 adrfam: ipv4 00:19:12.235 subtype: nvme subsystem 00:19:12.235 treq: not specified, sq flow control disable supported 00:19:12.235 portid: 1 00:19:12.235 trsvcid: 4420 00:19:12.235 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:12.235 traddr: 10.0.0.1 00:19:12.235 eflags: none 00:19:12.235 sectype: none 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:12.235 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.236 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 nvme0n1 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@746 -- # ip_candidates=() 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.494 nvme0n1 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.494 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.752 13:03:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:12.752 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.753 nvme0n1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.753 13:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.753 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.012 nvme0n1 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.012 13:03:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.012 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.013 nvme0n1 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.013 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.270 nvme0n1 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.270 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.271 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:13.528 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:13.785 13:03:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:13.785 13:03:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.785 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.785 13:03:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.785 nvme0n1 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.785 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.786 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.044 nvme0n1 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.044 13:03:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
local ip 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.044 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 nvme0n1 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 nvme0n1 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.302 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 
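[editor's note] For readers skimming the xtrace above: the target side of this test drives the kernel nvmet stack purely through configfs. Shell redirections are not captured by xtrace, so the attribute file names below (addr_*, dhchap_*) are an assumption based on the usual nvmet configfs layout rather than something printed in this log; treat this as a minimal sketch of what the nvmet_auth_set_key / port-setup steps amount to.

    # Minimal sketch (assumed attribute names) of the configfs setup behind the
    # xtrace above: export a subsystem on a TCP listener and give one host NQN
    # a DH-HMAC-CHAP key pair.
    cfs=/sys/kernel/config/nvmet
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    # Listener: 10.0.0.1:4420 over TCP/IPv4 (assumed addr_* attribute names).
    echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"
    echo tcp      > "$cfs/ports/1/addr_trtype"
    echo 4420     > "$cfs/ports/1/addr_trsvcid"
    echo ipv4     > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"

    # Per-host DH-HMAC-CHAP material (assumed dhchap_* attribute names);
    # the DHHC-1 strings stand in for the full secrets shown in the log.
    mkdir "$cfs/hosts/$hostnqn"
    echo 'hmac(sha256)'    > "$cfs/hosts/$hostnqn/dhchap_hash"
    echo ffdhe2048         > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
    echo "DHHC-1:00:...="  > "$cfs/hosts/$hostnqn/dhchap_key"        # host key
    echo "DHHC-1:02:...="  > "$cfs/hosts/$hostnqn/dhchap_ctrlr_key"  # only for bidirectional auth
    ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/"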
00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.560 nvme0n1 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.560 13:03:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.560 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
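[editor's note] On the initiator side, connect_authenticate exercises SPDK's bdev_nvme RPCs directly, and the same two calls repeat for every digest/dhgroup/key combination in the loops above. A condensed sketch of one iteration follows; key1/ckey1 are keyring names the test registers earlier in the script (not shown in this excerpt), so the keyring step is an assumption.

    # One connect_authenticate iteration, condensed from the rpc_cmd lines above.
    # rpc_cmd wraps scripts/rpc.py against the running SPDK target.
    rpc=scripts/rpc.py

    # (Assumed earlier setup) the DHHC-1 secrets are registered as keyring keys:
    #   $rpc keyring_file_add_key key1  /path/to/key1.dhchap
    #   $rpc keyring_file_add_key ckey1 /path/to/ckey1.dhchap

    # Restrict the driver to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key enables bidirectional auth.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up, then tear it down before the next combination.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    $rpc bdev_nvme_detach_controller nvme0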
00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.493 nvme0n1 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.493 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.494 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.751 13:03:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.751 nvme0n1 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.751 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.009 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.010 nvme0n1 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.010 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
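
Each `nvmet_auth_set_key <digest> <dhgroup> <keyid>` block in this trace programs the target side before the host attempts to connect; the xtrace only records the helper's echo statements (host/auth.sh@48-@51), not where their output is redirected. The sketch below shows what such a helper plausibly does against the Linux kernel nvmet target's configfs host entry; the configfs path and attribute names are assumptions and are not taken from this log.

# Hedged reconstruction of the target-side helper seen in the trace above.
# The configfs destination of each echo is an assumption; only the echoed
# values (hmac(<digest>), <dhgroup>, the DHHC-1 secrets) appear in the log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    # hypothetical host entry; the real test derives this path from its config
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe4096
    echo "$key"          > "$host_dir/dhchap_key"      # DHHC-1:... host secret
    if [[ -n $ckey ]]; then                            # keyid 4 has no ctrlr key
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    fi
}
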
key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.268 nvme0n1 00:19:16.268 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:16.526 13:03:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.526 13:03:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.784 nvme0n1 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.785 13:03:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.691 13:03:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.949 nvme0n1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
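
On the host side, every connect_authenticate round in this trace follows the same four-step pattern: restrict the initiator to the digest/dhgroup pair under test, attach a controller with the matching DH-HMAC-CHAP keys, check that the controller actually appeared, and detach it again. The condensed bash sketch below is assembled from the host/auth.sh@57-@65 lines above; rpc_cmd is the test suite's JSON-RPC wrapper, and the key names key1/ckey1 are assumed to have been registered with SPDK's keyring earlier in the test, outside this excerpt. (For keyid 4, which has no controller key, the @58 ckey expansion drops --dhchap-ctrlr-key entirely.)

# One connect_authenticate round, condensed from the trace above.
digest=sha256 dhgroup=ffdhe4096 keyid=1

# Allow only the digest/dhgroup pair under test on the initiator.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP; the attach only succeeds if authentication does.
# 10.0.0.1 comes from get_main_ns_ip (@61; see the sketch further below).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller exists, then clean up for the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
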
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z 
tcp ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.949 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.515 nvme0n1 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.515 
13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.515 13:03:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.773 nvme0n1 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.773 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
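
The get_main_ns_ip expansion that precedes every attach (nvmf/common.sh@745-@759) is how the test resolves the -a address: it maps the transport to the name of an environment variable and prints that variable's value, 10.0.0.1 for TCP in this run. The reconstruction below is a sketch built from the repeated xtrace of that helper in this section; only the branches actually taken are visible, so the early-return behaviour is an assumption.

# Reconstruction of get_main_ns_ip from the nvmf/common.sh@745-@759 trace.
# Untaken branches are not visible in the log, so the returns are assumed.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Unknown transport, or no candidate variable for it: nothing to print.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect lookup of that variable
    echo "${!ip}"                          # 10.0.0.1 in this run
}
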
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.774 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.342 nvme0n1 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.342 13:03:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.342 13:03:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.910 nvme0n1 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.910 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.911 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.477 nvme0n1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.477 13:03:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # local ip 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.477 13:03:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.411 nvme0n1 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
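
The host/auth.sh@100-@104 markers that keep reappearing are the outer driver: three nested loops over digests, DH groups, and key indices, where each iteration first programs the target (nvmet_auth_set_key) and then exercises the host path (connect_authenticate), the two helpers sketched earlier in this section. In this excerpt the digest stays at sha256 while the group advances from ffdhe4096 through ffdhe6144 to ffdhe8192. The skeleton below mirrors that shape; the array contents beyond the values visible here, and the stand-in key entries, are assumptions.

# Shape of the host/auth.sh@100-@104 driver loop seen throughout this trace.
digests=(sha256)                          # only sha256 appears in this excerpt
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)  # groups exercised in this excerpt
keys=(k0 k1 k2 k3 k4)                     # stand-ins for the five DHHC-1 secrets above

for digest in "${digests[@]}"; do             # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
    for keyid in "${!keys[@]}"; do            # host/auth.sh@102
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: program the target
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
    done
  done
done
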
DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.411 13:03:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.977 nvme0n1 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.977 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.236 
13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 
00:19:23.236 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.237 13:03:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.803 nvme0n1 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.803 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:23.804 
13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.804 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.740 nvme0n1 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.740 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 nvme0n1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
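On the target side, each nvmet_auth_set_key call feeds the digest, dhgroup and DHHC-1 secrets echoed at auth.sh@48-@51 into the kernel nvmet entry for the allowed host. The trace only shows the echo commands, not their redirect targets, so the sketch below fills those in with the usual Linux nvmet configfs attribute names as an assumption.

# Hedged sketch of where the echoes above typically land on a Linux nvmet target.
# The host path and attribute names are assumptions; only the echoed values
# (digest, dhgroup, DHHC-1 secrets) come from the trace itself.
# key/ckey stand for the DHHC-1:... strings assigned at auth.sh@45/@46 above.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"        # auth.sh@48
echo ffdhe2048      > "$host/dhchap_dhgroup"     # auth.sh@49
echo "$key"         > "$host/dhchap_key"         # auth.sh@50, host secret
echo "$ckey"        > "$host/dhchap_ctrl_key"    # auth.sh@51, only when a ckey is set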
00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.741 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 nvme0n1 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 nvme0n1 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.000 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # local ip 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 nvme0n1 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.259 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.518 nvme0n1 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 
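Every secret cycled through this sweep uses the DH-HMAC-CHAP key representation DHHC-1:<t>:<base64>:, where the middle field records how the secret was transformed (00 = untransformed, 01/02/03 = SHA-256/384/512). Keys in that format are normally produced with nvme-cli; a rough sketch is below, with the caveat that the flag spelling is assumed from upstream nvme-cli and does not appear anywhere in this log.

# Hedged sketch: generating a DHHC-1 formatted secret with nvme-cli (flags assumed;
# check `nvme gen-dhchap-key --help` on the installed version to confirm).
nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
# expected output shape: DHHC-1:02:<base64 payload>: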
00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.518 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 nvme0n1 00:19:25.830 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.830 13:03:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.830 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
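The auth.sh@100-@104 markers scattered through this section give away the overall driver: three nested loops over digests, dhgroups and key indexes, with the target reprogrammed and the host re-authenticated for every combination. Reconstructed from those markers, the sweep looks roughly like this:

# Condensed reconstruction of the sweep implied by the auth.sh@100-@104 trace
# markers; nvmet_auth_set_key and connect_authenticate are the helpers whose
# bodies are expanded throughout the log above.
for digest in "${digests[@]}"; do             # sha256, sha384, ... (auth.sh@100)
  for dhgroup in "${dhgroups[@]}"; do         # ffdhe2048 .. ffdhe8192 (auth.sh@101)
    for keyid in "${!keys[@]}"; do            # observed key ids 0-4 (auth.sh@102)
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program the target (auth.sh@103)
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach + verify from the host (auth.sh@104)
    done
  done
done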
00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 nvme0n1 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:25.830 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.831 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.089 nvme0n1 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.089 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.090 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.348 nvme0n1 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:26.348 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.349 nvme0n1 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.349 13:03:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.349 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@746 -- # ip_candidates=() 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.608 13:03:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.608 nvme0n1 00:19:26.608 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.608 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.608 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.608 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.608 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.867 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:26.868 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:26.868 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:26.868 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.868 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.868 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 nvme0n1 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 13:03:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 nvme0n1 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:27.386 13:03:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.645 nvme0n1 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:27.645 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:27.646 13:03:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.904 nvme0n1 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.904 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 nvme0n1 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.471 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.472 13:03:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.730 nvme0n1 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.730 13:03:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
local ip 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:28.730 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:28.731 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:28.731 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.731 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.731 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.295 nvme0n1 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:29.295 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.296 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.552 nvme0n1 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.552 13:03:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 
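For context, each nvmet_auth_set_key call traced above (host/auth.sh lines 42-51) stages one DH-HMAC-CHAP secret on the kernel nvmet target before the host attempts the matching connect. A minimal sketch of that step is below; the configfs attribute names and path are assumptions drawn from the upstream nvmet auth interface and are not visible in this trace, which only shows the echoed values, and the keys/ckeys arrays are assumed to have been populated earlier in the script.

    # Sketch of one nvmet_auth_set_key round: write digest, DH group and DHHC-1
    # secrets into the (assumed) nvmet configfs host entry for this host NQN.
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Assumed configfs location; the host entry must already exist.
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"   > "$host_dir/dhchap_hash"      # e.g. hmac(sha384)
        echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe6144
        echo "${keys[$keyid]}" > "$host_dir/dhchap_key"       # DHHC-1 host secret
        # Controller secret is optional (keyid 4 has none in this run).
        [[ -n "${ckeys[$keyid]}" ]] && echo "${ckeys[$keyid]}" > "$host_dir/dhchap_ctrl_key"
    }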
00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.809 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.067 nvme0n1 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.067 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
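The host side of each round is connect_authenticate (host/auth.sh lines 55-65 above): configure the allowed digest and DH group, attach with the matching key pair, confirm the controller shows up, then detach before the next keyid. A condensed sketch follows, mirroring the rpc_cmd calls in the trace; rpc_cmd itself and the key0/ckey0 key names are assumed to have been set up earlier in the test.

    # One host-side authentication round, as exercised repeatedly above.
    digest=sha384 dhgroup=ffdhe8192 keyid=0
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # The connect succeeded if the controller is visible by name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean up for the next keyid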
00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.324 13:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.893 nvme0n1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.893 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.459 nvme0n1 00:19:31.459 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.459 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.459 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.459 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.459 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.717 13:03:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.717 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.282 nvme0n1 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.282 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.574 13:03:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.141 nvme0n1 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:33.141 13:03:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.141 13:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.105 nvme0n1 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.105 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.106 nvme0n1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.106 13:03:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.106 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 nvme0n1 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 nvme0n1 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.363 13:03:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:34.363 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.364 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.620 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.620 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.620 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.620 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.621 13:03:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.621 nvme0n1 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 13:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.621 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 nvme0n1 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 nvme0n1 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.879 
13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.879 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.137 13:03:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 nvme0n1 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.137 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
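
The stretch of trace above repeats one fixed cycle per key id: host/auth.sh@60 pins the initiator to a single digest/dhgroup pair via bdev_nvme_set_options, @61 attaches the controller with the matching --dhchap-key/--dhchap-ctrlr-key, @64 confirms the controller name through bdev_nvme_get_controllers piped to jq, and @65 detaches it again. The bash sketch below condenses that cycle; the rpc_cmd stand-in and the SPDK_RPC path are assumptions for illustration, and the key names refer to keys registered earlier in the run, outside this excerpt.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate cycle (host/auth.sh@55-65).
# Assumes a running SPDK target at 10.0.0.1:4420 and DH-HMAC-CHAP keys already
# registered as key<N>/ckey<N> earlier in the test (not shown in this excerpt).
set -euo pipefail

# Minimal stand-in for the suite's rpc_cmd helper; SPDK_RPC is an assumed knob.
rpc_cmd() { "${SPDK_RPC:-scripts/rpc.py}" "$@"; }

digest=sha512
dhgroup=ffdhe3072
keyid=1
ckey_name="ckey${keyid}"   # left empty for key ids that carry no controller key

# Pin the initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach; this only succeeds if DH-HMAC-CHAP completes with the selected keys.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckey_name:+--dhchap-ctrlr-key "$ckey_name"}

# Verify the controller came up, then tear it down for the next iteration.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
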
00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 nvme0n1 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
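
Stepping back, everything in this section is produced by the three nested loops at host/auth.sh@100-102: each digest is combined with each DH group, and every key id is first programmed into the nvmet target (nvmet_auth_set_key, host/auth.sh@42-51) and then exercised through the connect/verify/detach cycle sketched above (connect_authenticate). A minimal outline follows; the digest and dhgroup arrays are trimmed to the values visible in this excerpt, and the real script iterates a longer list.

# Outline of the driving loops (host/auth.sh@100-104), trimmed to this excerpt.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
# keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets; the ckey for key id 4 is empty.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Target side: install hmac(<digest>), the dhgroup, and the key pair.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Initiator side: set options, attach, verify, detach (see sketch above).
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
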
00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.396 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.654 nvme0n1 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.654 13:03:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.654 
13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.654 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.912 nvme0n1 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.912 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.171 nvme0n1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.171 13:03:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.171 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.430 nvme0n1 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
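Each attach is then verified and torn down before the next key is tried, and that check repeats before every remaining iteration below: bdev_nvme_get_controllers is piped through jq to pull the controller name, the name is compared against nvme0 (the stray nvme0n1 tokens in between are the namespace bdev being reported while the test waits for it), and the controller is detached. A minimal sketch of that step, assuming the same rpc_cmd wrapper as above:

    # Verify the authenticated controller exists, then remove it so the next
    # keyid/dhgroup combination starts clean (mirrors auth.sh@64/@65 above).
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == nvme0 ]]                        # authentication and attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next iteration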
00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.430 13:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 nvme0n1 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
local ip 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:36.688 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:36.689 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:36.689 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:36.689 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.689 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.946 nvme0n1 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.946 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.204 nvme0n1 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 
00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.204 13:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.770 nvme0n1 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.770 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
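One detail worth noting in the iterations above and below: key 4 is the only one without a controller secret (its ckey echoes as empty), so connect_authenticate only adds --dhchap-ctrlr-key when a ckey exists, via the conditional array expansion shown at auth.sh@58. A hedged sketch of that logic, reusing the address and NQNs from the trace:

    # Mirrors the auth.sh@58 expansion above: bidirectional auth is requested only
    # when a controller key was generated for this keyid (keyid 4 has none).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"  # "${ckey[@]}" expands to nothing when ckey is unset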
00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.771 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 nvme0n1 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.028 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.286 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.544 nvme0n1 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.544 13:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.109 nvme0n1 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.109 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.367 nvme0n1 00:19:39.367 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.367 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.367 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.367 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.367 13:03:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:39.625 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzA3YjZmNGJlMjYzZjI1ZTE0YzhiNjdhODZjZjhhZjlTKjiH: 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: ]] 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjkwNGY0ODE3ODA4Zjc2ZTc2ZDAwZWEzMDgwYjc5NzM5MDJmMmY5MTlhNzZhMjhkZTY1YzkzMmI5ZTA5OTRkMEQ1NrE=: 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@746 -- # ip_candidates=() 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.626 13:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.193 nvme0n1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.193 13:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.126 nvme0n1 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.126 13:03:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.126 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzg3OWRlOWUwYWI4ZWM1YmEwMzIyNjE5YjY5NGJiYmFawr8U: 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWJiYzc1MGNmZWExZDc5YjhiZDU0ODU2ZDcyNmZjMzem+Him: 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.127 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.692 nvme0n1 00:19:41.692 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.692 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.692 13:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.692 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.692 13:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxMGQ3ZTA2ODI5ODEyYmI3NDE4YzZmYThlZjNjZDM5ODEzM2JhYjdhMTZjOWJj0Q0+3w==: 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmM2MjE3MjM4NzAzODYzYzdkOWM2ZDcxYTIxMjFjZmGp++ZS: 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:41.692 13:03:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.692 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.258 nvme0n1 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.258 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWMxNzcxNTVhMGVkZjI0M2FkZWRhZDI3YjQ0ZGY5ZjZhYTU1Mzc1NDA4NzE1NzA4NmYzMWQwYjE0ZDgzMzAxNXNsJWc=: 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.515 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:42.516 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:42.516 13:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:42.516 13:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.516 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:42.516 13:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.081 nvme0n1 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmUxYTQzMzVmZjYyYzc0ZWI0ODBhY2ViN2E3MGQwMjdlMGRkN2M3MmM1MjZhNDcw3lmu8A==: 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWE0NWFlMjM2ODMzNTg5ZjIyNmE0ZDMyZDhhZWI4MDRiNmUwY2ZlZmJmN2NkNzNl4c7RFA==: 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.081 
13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.081 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.081 2024/07/15 13:03:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:43.081 request: 00:19:43.081 { 00:19:43.081 "method": "bdev_nvme_attach_controller", 00:19:43.081 "params": { 00:19:43.081 "name": "nvme0", 00:19:43.081 "trtype": "tcp", 00:19:43.081 "traddr": "10.0.0.1", 00:19:43.081 "adrfam": "ipv4", 00:19:43.081 "trsvcid": "4420", 00:19:43.081 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:43.081 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:43.081 "prchk_reftag": false, 00:19:43.081 "prchk_guard": false, 00:19:43.081 "hdgst": false, 00:19:43.081 "ddgst": false 00:19:43.081 } 00:19:43.081 } 00:19:43.082 Got JSON-RPC error response 00:19:43.082 GoRPCClient: error on JSON-RPC call 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.082 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.340 2024/07/15 13:03:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:43.340 request: 00:19:43.340 { 00:19:43.340 "method": "bdev_nvme_attach_controller", 00:19:43.340 "params": { 00:19:43.340 "name": 
"nvme0", 00:19:43.340 "trtype": "tcp", 00:19:43.340 "traddr": "10.0.0.1", 00:19:43.340 "adrfam": "ipv4", 00:19:43.340 "trsvcid": "4420", 00:19:43.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:43.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:43.340 "prchk_reftag": false, 00:19:43.340 "prchk_guard": false, 00:19:43.340 "hdgst": false, 00:19:43.340 "ddgst": false, 00:19:43.340 "dhchap_key": "key2" 00:19:43.340 } 00:19:43.340 } 00:19:43.340 Got JSON-RPC error response 00:19:43.340 GoRPCClient: error on JSON-RPC call 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.340 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.340 2024/07/15 13:03:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:43.340 request: 00:19:43.340 { 00:19:43.340 "method": "bdev_nvme_attach_controller", 00:19:43.340 "params": { 00:19:43.340 "name": "nvme0", 00:19:43.340 "trtype": "tcp", 00:19:43.340 "traddr": "10.0.0.1", 00:19:43.340 "adrfam": "ipv4", 00:19:43.340 "trsvcid": "4420", 00:19:43.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:43.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:43.340 "prchk_reftag": false, 00:19:43.340 "prchk_guard": false, 00:19:43.340 "hdgst": false, 00:19:43.340 "ddgst": false, 00:19:43.340 "dhchap_key": "key1", 00:19:43.340 "dhchap_ctrlr_key": "ckey2" 00:19:43.340 } 00:19:43.340 } 00:19:43.341 Got JSON-RPC error response 00:19:43.341 GoRPCClient: error on JSON-RPC call 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.341 rmmod nvme_tcp 00:19:43.341 rmmod nvme_fabrics 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@493 -- # '[' -n 91503 ']' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@494 -- # killprocess 91503 00:19:43.341 13:03:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91503 ']' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91503 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91503 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.341 killing process with pid 91503 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91503' 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91503 00:19:43.341 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91503 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # echo 0 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:43.599 13:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:43.599 13:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:43.599 13:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:43.599 13:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:19:43.599 13:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:19:43.599 13:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:44.533 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.533 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:44.533 13:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TbI /tmp/spdk.key-null.vkK /tmp/spdk.key-sha256.grY /tmp/spdk.key-sha384.vR4 /tmp/spdk.key-sha512.jF8 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:44.533 13:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:44.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.791 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:44.791 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:44.791 00:19:44.791 real 0m35.996s 00:19:44.791 user 0m32.084s 00:19:44.791 sys 0m3.350s 00:19:44.791 13:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.791 13:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.791 ************************************ 00:19:44.791 END TEST nvmf_auth_host 00:19:44.791 ************************************ 00:19:44.791 13:03:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:44.791 13:03:57 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:44.791 13:03:57 nvmf_tcp -- nvmf/nvmf.sh@112 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:44.791 13:03:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:44.791 13:03:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.791 13:03:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:44.791 ************************************ 00:19:44.791 START TEST nvmf_digest 00:19:44.791 ************************************ 00:19:44.791 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:45.049 * Looking for test storage... 
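Two points from the nvmf_auth_host run that ends above are worth pulling out of the trace. First, the final attach attempts are deliberate failure cases for DH-HMAC-CHAP: one with no key material, one with a key (key2) the target is not configured for, and one with a mismatched controller key (key1/ckey2); each is wrapped in NOT, so the test only passes when rpc_cmd returns the Code=-5 (Input/output error) seen in the JSON-RPC responses. A minimal sketch of that pattern, assuming rpc_cmd and NOT behave as the common/autotest_common.sh helpers in the trace suggest (attach_should_fail is an illustrative wrapper, not part of auth.sh):

    # Illustrative wrapper: NOT inverts the exit status, so every call below must fail.
    attach_should_fail() {
        NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 "$@"
    }
    attach_should_fail                                             # no DH-HMAC-CHAP key at all
    attach_should_fail --dhchap-key key2                           # key the target does not expect
    attach_should_fail --dhchap-key key1 --dhchap-ctrlr-key ckey2  # mismatched controller key

Second, the cleanup unwinds the kernel nvmet configfs tree, unloads the target modules, and removes the generated DHHC-1 key files. Condensed to the commands recorded in the trace (the bare 'echo 0' step is omitted because xtrace does not capture its redirect target):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host from the allow list
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"                           # namespace, then port, then subsystem
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                            # unload the kernel target
    rm -f /tmp/spdk.key-null.TbI /tmp/spdk.key-null.vkK /tmp/spdk.key-sha256.grY \
          /tmp/spdk.key-sha384.vR4 /tmp/spdk.key-sha512.jF8   # DHHC-1 secrets generated for this run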
00:19:45.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.049 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.050 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # [[ 
virt != virt ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@436 -- # nvmf_veth_init 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:19:45.050 Cannot find device "nvmf_tgt_br" 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.050 Cannot find device "nvmf_tgt_br2" 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:19:45.050 Cannot find device "nvmf_tgt_br" 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:19:45.050 Cannot find device "nvmf_tgt_br2" 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.050 13:03:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@167 -- # true 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.050 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:19:45.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:45.309 00:19:45.309 --- 10.0.0.2 ping statistics --- 00:19:45.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.309 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:19:45.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:45.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:45.309 00:19:45.309 --- 10.0.0.3 ping statistics --- 00:19:45.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.309 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:45.309 00:19:45.309 --- 10.0.0.1 ping statistics --- 00:19:45.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.309 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@437 -- # return 0 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.309 ************************************ 00:19:45.309 START TEST nvmf_digest_clean 00:19:45.309 ************************************ 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@485 -- # nvmfpid=93087 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@486 -- # waitforlisten 93087 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93087 ']' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.309 13:03:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.309 [2024-07-15 13:03:57.751093] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:19:45.309 [2024-07-15 13:03:57.751211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.568 [2024-07-15 13:03:57.891976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.568 [2024-07-15 13:03:57.980969] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.568 [2024-07-15 13:03:57.981042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.568 [2024-07-15 13:03:57.981062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.568 [2024-07-15 13:03:57.981077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.568 [2024-07-15 13:03:57.981090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
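The nvmf/common.sh lines above build the virtual test network and start the target inside it. A condensed sketch of that bring-up follows; interface names, addresses and the NVMe/TCP port are kept exactly as they appear in the log, while the link-up steps and the full binary path are abbreviated.

    # namespace plus three veth pairs; the *_if ends carry traffic, the *_br ends get bridged
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bridge the peer ends together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity pings in both directions (all links must be set up first, as in the log),
    # then launch the target paused at --wait-for-rpc inside the namespace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &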
00:19:45.568 [2024-07-15 13:03:57.981138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.568 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.568 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:45.568 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:45.568 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.568 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.826 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.826 null0 00:19:45.826 [2024-07-15 13:03:58.117406] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.826 [2024-07-15 13:03:58.141553] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93125 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93125 /var/tmp/bperf.sock 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93125 ']' 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.827 13:03:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:45.827 [2024-07-15 13:03:58.196688] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:19:45.827 [2024-07-15 13:03:58.196814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93125 ] 00:19:46.085 [2024-07-15 13:03:58.328932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.085 [2024-07-15 13:03:58.414952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.018 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.018 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:47.018 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:47.018 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:47.018 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:47.276 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.276 13:03:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.842 nvme0n1 00:19:47.842 13:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:47.842 13:04:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:47.842 Running I/O for 2 seconds... 
00:19:50.369 00:19:50.369 Latency(us) 00:19:50.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.369 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:50.369 nvme0n1 : 2.00 17923.93 70.02 0.00 0.00 7133.04 3559.80 21328.99 00:19:50.369 =================================================================================================================== 00:19:50.369 Total : 17923.93 70.02 0.00 0.00 7133.04 3559.80 21328.99 00:19:50.369 0 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:50.369 | select(.opcode=="crc32c") 00:19:50.369 | "\(.module_name) \(.executed)"' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93125 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93125 ']' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93125 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93125 00:19:50.369 killing process with pid 93125 00:19:50.369 Received shutdown signal, test time was about 2.000000 seconds 00:19:50.369 00:19:50.369 Latency(us) 00:19:50.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.369 =================================================================================================================== 00:19:50.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93125' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93125 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93125 00:19:50.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
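Every run_bperf iteration in nvmf_digest_clean repeats the same sequence; only the workload (-w), block size (-o) and queue depth (-q) change. A condensed sketch of the iteration that just finished (randread, 4096-byte blocks, qd 128), with paths shortened relative to the spdk repo but otherwise the same commands and RPCs as traced in the log, is:

    # bdevperf is the initiator; -z keeps it idle and --wait-for-rpc defers framework init
    BPERF=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    ./scripts/rpc.py -s $BPERF framework_start_init
    # --ddgst enables the NVMe/TCP data digest, so every data PDU is covered by a crc32c
    ./scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests   # the 2-second run shown above

    # pass criterion: crc32c was executed, and by the expected accel module
    ./scripts/rpc.py -s $BPERF accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # with no DSA configured the test expects module "software" and executed > 0; the
    # bdevperf process is then killed and the next iteration starts with new parameters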
00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93210 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93210 /var/tmp/bperf.sock 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93210 ']' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.369 13:04:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:50.369 [2024-07-15 13:04:02.767200] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:19:50.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:50.369 Zero copy mechanism will not be used. 
00:19:50.369 [2024-07-15 13:04:02.767350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93210 ] 00:19:50.627 [2024-07-15 13:04:02.916086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.627 [2024-07-15 13:04:03.002299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.560 13:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.560 13:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:51.560 13:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:51.560 13:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:51.560 13:04:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:52.149 13:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:52.149 13:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:52.408 nvme0n1 00:19:52.666 13:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:52.666 13:04:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:52.666 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:52.666 Zero copy mechanism will not be used. 00:19:52.666 Running I/O for 2 seconds... 
00:19:55.195 00:19:55.195 Latency(us) 00:19:55.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.195 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:55.195 nvme0n1 : 2.00 7614.57 951.82 0.00 0.00 2096.82 688.87 4706.68 00:19:55.195 =================================================================================================================== 00:19:55.195 Total : 7614.57 951.82 0.00 0.00 2096.82 688.87 4706.68 00:19:55.195 0 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:55.195 | select(.opcode=="crc32c") 00:19:55.195 | "\(.module_name) \(.executed)"' 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93210 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93210 ']' 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93210 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93210 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.195 killing process with pid 93210 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93210' 00:19:55.195 Received shutdown signal, test time was about 2.000000 seconds 00:19:55.195 00:19:55.195 Latency(us) 00:19:55.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.195 =================================================================================================================== 00:19:55.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93210 00:19:55.195 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93210 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93306 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93306 /var/tmp/bperf.sock 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93306 ']' 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.454 13:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:55.454 [2024-07-15 13:04:07.752616] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:19:55.454 [2024-07-15 13:04:07.752748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93306 ] 00:19:55.454 [2024-07-15 13:04:07.897532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.713 [2024-07-15 13:04:07.957067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.648 13:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.648 13:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:56.648 13:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:56.648 13:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:56.648 13:04:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:56.905 13:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:56.905 13:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:57.163 nvme0n1 00:19:57.163 13:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:57.163 13:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:57.422 Running I/O for 2 seconds... 
00:19:59.315 00:19:59.315 Latency(us) 00:19:59.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.315 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.315 nvme0n1 : 2.00 21111.05 82.47 0.00 0.00 6056.15 2532.07 10604.92 00:19:59.315 =================================================================================================================== 00:19:59.315 Total : 21111.05 82.47 0.00 0.00 6056.15 2532.07 10604.92 00:19:59.315 0 00:19:59.315 13:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:59.315 13:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:59.315 13:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:59.315 13:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:59.315 | select(.opcode=="crc32c") 00:19:59.315 | "\(.module_name) \(.executed)"' 00:19:59.315 13:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93306 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93306 ']' 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93306 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93306 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.878 killing process with pid 93306 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93306' 00:19:59.878 Received shutdown signal, test time was about 2.000000 seconds 00:19:59.878 00:19:59.878 Latency(us) 00:19:59.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.878 =================================================================================================================== 00:19:59.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.878 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93306 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93306 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93401 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93401 /var/tmp/bperf.sock 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93401 ']' 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.879 13:04:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:00.167 [2024-07-15 13:04:12.380779] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:20:00.167 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:00.167 Zero copy mechanism will not be used. 
00:20:00.167 [2024-07-15 13:04:12.380933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93401 ] 00:20:00.167 [2024-07-15 13:04:12.528195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.167 [2024-07-15 13:04:12.615305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.097 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.097 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:01.098 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:01.098 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:01.098 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:01.663 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.663 13:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:01.921 nvme0n1 00:20:01.921 13:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:01.921 13:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:02.178 Zero copy mechanism will not be used. 00:20:02.178 Running I/O for 2 seconds... 
00:20:04.077 00:20:04.077 Latency(us) 00:20:04.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.077 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:04.077 nvme0n1 : 2.00 6748.85 843.61 0.00 0.00 2364.41 1936.29 7089.80 00:20:04.077 =================================================================================================================== 00:20:04.077 Total : 6748.85 843.61 0.00 0.00 2364.41 1936.29 7089.80 00:20:04.077 0 00:20:04.077 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:04.077 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:04.077 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:04.077 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:04.077 | select(.opcode=="crc32c") 00:20:04.077 | "\(.module_name) \(.executed)"' 00:20:04.077 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93401 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93401 ']' 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93401 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93401 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:04.644 killing process with pid 93401 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93401' 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93401 00:20:04.644 Received shutdown signal, test time was about 2.000000 seconds 00:20:04.644 00:20:04.644 Latency(us) 00:20:04.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.644 =================================================================================================================== 00:20:04.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.644 13:04:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93401 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93087 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93087 ']' 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93087 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93087 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:04.644 killing process with pid 93087 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93087' 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93087 00:20:04.644 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93087 00:20:04.902 00:20:04.902 real 0m19.510s 00:20:04.902 user 0m39.739s 00:20:04.902 sys 0m4.620s 00:20:04.902 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.902 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:04.902 ************************************ 00:20:04.903 END TEST nvmf_digest_clean 00:20:04.903 ************************************ 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:04.903 ************************************ 00:20:04.903 START TEST nvmf_digest_error 00:20:04.903 ************************************ 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@485 -- # nvmfpid=93520 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@486 -- # waitforlisten 93520 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93520 ']' 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.903 13:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:04.903 [2024-07-15 13:04:17.310046] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:20:04.903 [2024-07-15 13:04:17.310168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.161 [2024-07-15 13:04:17.449808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.161 [2024-07-15 13:04:17.535722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.161 [2024-07-15 13:04:17.535819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.161 [2024-07-15 13:04:17.535842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.161 [2024-07-15 13:04:17.535858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.161 [2024-07-15 13:04:17.535870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.161 [2024-07-15 13:04:17.535909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.096 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 [2024-07-15 13:04:18.324509] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.097 13:04:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 null0 00:20:06.097 [2024-07-15 13:04:18.398684] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.097 [2024-07-15 13:04:18.422839] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93560 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93560 /var/tmp/bperf.sock 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93560 ']' 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.097 13:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 [2024-07-15 13:04:18.503426] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:20:06.097 [2024-07-15 13:04:18.503553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93560 ] 00:20:06.355 [2024-07-15 13:04:18.646097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.355 [2024-07-15 13:04:18.721369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.289 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.289 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:07.289 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.289 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.548 13:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.805 nvme0n1 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:07.805 13:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:08.064 Running I/O for 2 seconds... 
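nvmf_digest_error differs from the clean variant only in how crc32c is serviced: the target is started after assigning the crc32c opcode to the accel "error" module, and once bdevperf is connected the test switches that module from "disable" to "corrupt" so computed data digests stop matching. A condensed sketch of the setup traced above (rpc_cmd goes to the nvmf_tgt socket, bperf_rpc to /var/tmp/bperf.sock; paths shortened, flags as in the log):

    # target side: route crc32c through the error-injection accel module (before the listener is up)
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error

    # initiator side: count NVMe errors, retry indefinitely, and attach with data digest on
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # no corruption while connecting
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # now corrupt the digests
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # each corrupted digest is what produces the lines that follow: nvme_tcp reports
    # "data digest error on tqpair=..." and the command completes with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme then retries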
00:20:08.064 [2024-07-15 13:04:20.307532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.307606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.307622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.322483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.322546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.322561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.338832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.338900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.353059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.353121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.353137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.367897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.367959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.367975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.381809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.381865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.381880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.394067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.394118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.394133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.406691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.406752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.406782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.420617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.420680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.420695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.435376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.435443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.435458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.449333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.449398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.449414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.464164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.464233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.464248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.476436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.476505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.476520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.491053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.491118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.491133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.506401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.506478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.506496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.064 [2024-07-15 13:04:20.520780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.064 [2024-07-15 13:04:20.520846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.064 [2024-07-15 13:04:20.520862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.535457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.535531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.535547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.550681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.550744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.563004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.563072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.563087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.576164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.576227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.576241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.591843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.591906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.323 [2024-07-15 13:04:20.591921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.604643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.604700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.604716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.618727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.618804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.618820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.631075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.631139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.631155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.647600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.647665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.647681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.661784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.661848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.661864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.674039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.674089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.674103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.691256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.691323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:9909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.691339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.702744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.702816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.702831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.720880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.720944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.720959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.735177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.735223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.747829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.747880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.747894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.763202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.763267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.763281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.777382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.777462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.323 [2024-07-15 13:04:20.789837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.323 [2024-07-15 13:04:20.789900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.323 [2024-07-15 13:04:20.789915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.805886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.805945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.805960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.820565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.820623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.820639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.833524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.833578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.833593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.847728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.847807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.847823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.862070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.862129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.862144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.876306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.876363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.876379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.890177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 
00:20:08.582 [2024-07-15 13:04:20.890232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.890247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.904874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.904933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.904949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.582 [2024-07-15 13:04:20.918963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.582 [2024-07-15 13:04:20.919022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.582 [2024-07-15 13:04:20.919037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:20.934323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:20.934382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:20.934397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:20.949507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:20.949571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:20.949586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:20.959819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:20.959875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:20.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:20.975509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:20.975571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:20.975587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:20.989648] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:20.989709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:20.989725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:21.001623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:21.001680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:21.001695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:21.018405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:21.018461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:21.018476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:21.033476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:21.033540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:21.033556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.583 [2024-07-15 13:04:21.047547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.583 [2024-07-15 13:04:21.047606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.583 [2024-07-15 13:04:21.047622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.060613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.060676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.060692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.075922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.075989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.076005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.090543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.090609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.090625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.105392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.105465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.105481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.117637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.117704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.117720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.132004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.132081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.132098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.146255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.146317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.146334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.161711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.161788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.161806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.175890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.175964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.175989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.193970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.194067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.194095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.213653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.213819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.231261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.231363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.231391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.246979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.247081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.247110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.262735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.262851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.262879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.280221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.280352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.293463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.293584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.293616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.842 [2024-07-15 13:04:21.306534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:08.842 [2024-07-15 13:04:21.306646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.842 [2024-07-15 13:04:21.306675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.324503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.324615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.324644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.340361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.340463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.340490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.357005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.357105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.357132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.373033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.373139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.373171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.386868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.386971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.387000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.403159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.403275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:09.101 [2024-07-15 13:04:21.403306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.418652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.418757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.418819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.436006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.436087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.436104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.448314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.448387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.448404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.461867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.461943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.461959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.477497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.477574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.477591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.492568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.492641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.492657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.506601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.506678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.506695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.521705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.521795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.521813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.534477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.534557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.534573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.551124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.551203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.551220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.101 [2024-07-15 13:04:21.565412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.101 [2024-07-15 13:04:21.565485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.101 [2024-07-15 13:04:21.565502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.580134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.580209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.580226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.595005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.595082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.595098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.609253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.609328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.609343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.621883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.621967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.621985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.638135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.638209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.638226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.651031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.651104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.651120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.664913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.664986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.665003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.679219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.679305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.679321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.693379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.693452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.693468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.708139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 
00:20:09.360 [2024-07-15 13:04:21.708215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.708232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.722625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.722695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.722712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.360 [2024-07-15 13:04:21.734756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.360 [2024-07-15 13:04:21.734842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.360 [2024-07-15 13:04:21.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.749629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.749701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.749717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.762860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.762936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.762952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.779991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.780064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.780081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.794232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.794305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.794322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.809431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.809507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.809524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.361 [2024-07-15 13:04:21.824746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.361 [2024-07-15 13:04:21.824841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.361 [2024-07-15 13:04:21.824859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.840429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.840517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.840536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.855377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.855459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.855476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.870024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.870103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.870120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.884429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.884501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.884517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.899351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.899428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.899444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:09.620 [2024-07-15 13:04:21.914464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.914538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.914554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.928400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.928484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.928501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.942198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.942266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.942283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.956420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.956504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.956522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.971870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.971943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.620 [2024-07-15 13:04:21.971960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.620 [2024-07-15 13:04:21.987574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.620 [2024-07-15 13:04:21.987648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:21.987665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.001957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.002034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.002050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.015662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.015742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.015757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.030879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.030971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.030990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.046819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.046907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.046931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.061640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.061711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.061727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.621 [2024-07-15 13:04:22.076313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.621 [2024-07-15 13:04:22.076379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.621 [2024-07-15 13:04:22.076395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.879 [2024-07-15 13:04:22.088245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.879 [2024-07-15 13:04:22.088324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.879 [2024-07-15 13:04:22.088341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.879 [2024-07-15 13:04:22.104217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.879 [2024-07-15 13:04:22.104289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.879 [2024-07-15 13:04:22.104305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.879 [2024-07-15 13:04:22.119760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.879 [2024-07-15 13:04:22.119842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.879 [2024-07-15 13:04:22.119859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.879 [2024-07-15 13:04:22.133423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.879 [2024-07-15 13:04:22.133495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.879 [2024-07-15 13:04:22.133511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.879 [2024-07-15 13:04:22.148652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.879 [2024-07-15 13:04:22.148918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.879 [2024-07-15 13:04:22.148939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.162578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.162839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.163083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.179524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.179829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.179858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.196899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.196974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.196998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.212727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.212830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:09.880 [2024-07-15 13:04:22.212855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.229324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.229412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.229437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.244753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.244863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.244890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.261174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.261248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.261265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 [2024-07-15 13:04:22.275975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c053e0) 00:20:09.880 [2024-07-15 13:04:22.276046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.880 [2024-07-15 13:04:22.276065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.880 00:20:09.880 Latency(us) 00:20:09.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.880 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:09.880 nvme0n1 : 2.00 17305.43 67.60 0.00 0.00 7387.91 3708.74 23831.27 00:20:09.880 =================================================================================================================== 00:20:09.880 Total : 17305.43 67.60 0.00 0.00 7387.91 3708.74 23831.27 00:20:09.880 0 00:20:09.880 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:09.880 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:09.880 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:09.880 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:09.880 | .driver_specific 00:20:09.880 | .nvme_error 00:20:09.880 | .status_code 00:20:09.880 | .command_transient_transport_error' 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:20:10.445 13:04:22 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93560 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93560 ']' 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93560 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93560 00:20:10.445 killing process with pid 93560 00:20:10.445 Received shutdown signal, test time was about 2.000000 seconds 00:20:10.445 00:20:10.445 Latency(us) 00:20:10.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.445 =================================================================================================================== 00:20:10.445 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93560' 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93560 00:20:10.445 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93560 00:20:10.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93650 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93650 /var/tmp/bperf.sock 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93650 ']' 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
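For readability, the assertion traced above (get_transient_errcount followed by killprocess in host/digest.sh) condenses to the sketch below. It is a paraphrase of those helpers, not the script itself: the rpc.py path, socket, bdev name and pid 93560 are the values from this run, and the jq filter is the same one shown in the trace, written as a single path instead of the piped form.
# Assert that the injected crc32c corruption from the randread 4096/qd=128 pass just
# summarised surfaced as transient transport errors on nvme0n1, then tear down that
# bdevperf instance before the next I/O pattern is started.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 ))                                        # 135 in this run
pid=93560                                             # bdevperf pid from the trace above
kill "$pid"
while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done  # wait for it to exit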
00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.703 13:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:10.703 [2024-07-15 13:04:22.989534] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:20:10.703 [2024-07-15 13:04:22.990005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93650 ] 00:20:10.703 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:10.703 Zero copy mechanism will not be used. 00:20:10.703 [2024-07-15 13:04:23.134414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.961 [2024-07-15 13:04:23.214861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.907 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.907 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:11.907 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:11.907 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.191 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:12.449 nvme0n1 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:12.449 13:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:12.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:12.708 Zero copy mechanism will not be used. 00:20:12.708 Running I/O for 2 seconds... 
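The setup traced just above for the next case (randread, 128 KiB I/O, queue depth 16) boils down to the sequence below. The addresses, NQN, --ddgst flag and -i 32 interval are copied from the trace; which RPC socket the two accel_error_inject_error calls land on is decided by the suite's rpc_cmd helper and is not visible in this log, so they are shown here against rpc.py's default socket.
# Configure error statistics and unlimited retries on the initiator-side bdev, attach
# the TCP controller with data digest enabled, re-arm crc32c corruption, then kick off
# the bdevperf run over its RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf=/var/tmp/bperf.sock
"$rpc" -s "$bperf" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$rpc" accel_error_inject_error -o crc32c -t disable        # clear any injection left from the previous case
"$rpc" -s "$bperf" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # attach with data digest enabled
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32   # re-arm crc32c corruption (interval value from this run)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf" perform_tests
With the digests deliberately corrupted, every affected READ completes with the transient transport error seen in the stream that follows, and --bdev-retry-count -1 keeps bdevperf retrying the I/O instead of failing the job.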
00:20:12.708 [2024-07-15 13:04:24.957378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.957450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.957467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.961907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.961953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.961968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.967195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.967257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.967275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.971996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.972043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.972059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.975333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.975375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.975389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.979977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.980024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.980039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.985116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.708 [2024-07-15 13:04:24.985167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.708 [2024-07-15 13:04:24.985182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.708 [2024-07-15 13:04:24.989546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:24.989592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:24.989608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:24.993051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:24.993099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:24.993113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:24.998181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:24.998233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:24.998248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.002078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.002126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.002142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.006649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.006699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.006714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.011718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.011785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.011802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.014582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.014623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.014638] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.019479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.019526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.019541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.023299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.023344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.023359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.027296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.027341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.027357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.032709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.032775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.032793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.036953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.036997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.037012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.039836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.039879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.039894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.045617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.045670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.045686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.050198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.050251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.050267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.054638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.054700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.054718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.058355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.058413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.058429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.062580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.062635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.062651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.067039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.067092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.067107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.071003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.071059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.071074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.074916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.074963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.709 [2024-07-15 13:04:25.074978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.079374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.079439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.709 [2024-07-15 13:04:25.083527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.709 [2024-07-15 13:04:25.083575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.709 [2024-07-15 13:04:25.083590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.087138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.087182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.087196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.092151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.092200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.092215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.097255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.097303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.097318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.100849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.100890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.100904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.105032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.105077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.105092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.109105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.109148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.109163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.112739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.112795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.112810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.117385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.117433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.117448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.122460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.122506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.122524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.127395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.127440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.127456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.130308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.130350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.130364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.134496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.134542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.134557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.139382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.139427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.139442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.142853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.142894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.142908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.147498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.147545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.147559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.151120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.151163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.151178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.155453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.155505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.155520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.159616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.159676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.163995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.164041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.164056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.167122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.167165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.167181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.170970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.171014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.710 [2024-07-15 13:04:25.171029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.710 [2024-07-15 13:04:25.175543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.710 [2024-07-15 13:04:25.175619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.175647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.179667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.179747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.184429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.184496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.184525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.188741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.188815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.188840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.192578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 
[2024-07-15 13:04:25.192646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.192671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.197328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.197383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.197399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.200897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.200944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.200959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.205232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.205293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.205309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.209008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.209058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.970 [2024-07-15 13:04:25.209073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.970 [2024-07-15 13:04:25.212817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.970 [2024-07-15 13:04:25.212868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.212884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.216573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.216629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.216652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.220517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.220580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.220596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.225017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.225072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.225087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.228390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.228436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.228451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.232562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.232615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.232630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.236541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.236619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.236643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.241809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.241864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.241880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.247890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.247964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.247989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.251708] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.251759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.251808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.256824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.256876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.261817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.261882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.261898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.266949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.267038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.267054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.270644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.270696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.270712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.275123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.275176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.275191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.279305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.279353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.279379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
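Each triplet in this stream reads the same way: the host-side accel callback in nvme_tcp.c detects the mismatching data digest, the affected READ is printed, and the command is completed with status (00/22), i.e. status code type 0 (generic) and status code 0x22, which SPDK decodes as COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the do-not-retry bit is clear, so the retry policy configured above re-queues the I/O rather than failing the job. A rough cross-check of the RPC counter against this console output (the file name below is only illustrative):
# Count the transient-transport-error completions in a saved copy of this console log
# and compare with the bdev_get_iostat counter queried over /var/tmp/bperf.sock.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log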
00:20:12.971 [2024-07-15 13:04:25.283542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.283595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.283610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.287362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.287408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.287423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.291164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.291210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.291224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.294706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.294751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.294786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.299000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.299047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.299061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.302518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.302567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.302581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.307006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.307051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.307067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.311327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.311375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.311390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.315616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.315664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.315680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.319368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.319414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.319429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.324073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.324127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.324142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.328299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.971 [2024-07-15 13:04:25.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.971 [2024-07-15 13:04:25.328368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.971 [2024-07-15 13:04:25.332211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.332286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.337346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.337398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.337414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.341608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.341659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.341674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.345578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.345626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.345642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.350383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.350433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.350448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.354350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.354398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.354414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.358011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.358057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.358072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.362298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.362361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.362377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.366105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.366155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.366171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.370100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.370151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.370167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.374578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.374625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.374641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.378037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.378100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.383000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.383050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.383065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.388123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.388185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.388201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.392304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.392353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.392368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.395682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.395726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 
[2024-07-15 13:04:25.395741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.400188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.400251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.403980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.404027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.404042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.407899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.407964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.413026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.413079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.413094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.417985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.418038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.418054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.423115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.423170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.423186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.426408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.426464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.426480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.431859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.431918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.431933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.972 [2024-07-15 13:04:25.437069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:12.972 [2024-07-15 13:04:25.437131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.972 [2024-07-15 13:04:25.437147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.441852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.441903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.441918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.445452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.445502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.445517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.451027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.451078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.451094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.455547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.455593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.455608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.458673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.458717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.458732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.463510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.463563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.463578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.467817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.467866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.467882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.471446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.471496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.471511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.475530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.475598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.475615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.480402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.480463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.480479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.483815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.483874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.483890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.487430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.487485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.487500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.491687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.491748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.491777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.496210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.496265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.496281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.501162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.501221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.501237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.505126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.505176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.505191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.509419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.509472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.509487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.513504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.513554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.513570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.518092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 
[2024-07-15 13:04:25.518145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.522820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.522873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.522889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.526017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.526062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.526077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.530070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.530123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.231 [2024-07-15 13:04:25.530140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.231 [2024-07-15 13:04:25.533995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.231 [2024-07-15 13:04:25.534043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.534058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.537647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.537700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.537715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.541942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.541994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.546493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.546544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.546559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.549380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.549427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.549442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.554575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.554626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.554641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.557977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.558023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.558038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.562652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.562705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.562720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.567071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.567122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.567138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.571176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.571226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.571251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.575458] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.575508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.575523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.579443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.579491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.579507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.583692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.583743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.583758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.588251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.588301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.588317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.591799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.591844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.591859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.596226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.596276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.596293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.600695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.600749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.600777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:13.232 [2024-07-15 13:04:25.604698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.604748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.604778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.607958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.608001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.608016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.612050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.612102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.612117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.616307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.616359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.616375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.620086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.620135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.232 [2024-07-15 13:04:25.620150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.232 [2024-07-15 13:04:25.624103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.232 [2024-07-15 13:04:25.624150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.624166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.628305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.628357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.628372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.632118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.632166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.632182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.636709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.636774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.636791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.640113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.640164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.640179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.644417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.644468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.644484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.647872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.647920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.647935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.652295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.652346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.652362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.657401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.657449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.657465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.662030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.662094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.665719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.665781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.665798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.669866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.669916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.669931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.673900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.673951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.673966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.678681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.678735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.678751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.683432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.683480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.683496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.687281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.687332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.687359] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.692281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.692334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.692350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.233 [2024-07-15 13:04:25.695548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.233 [2024-07-15 13:04:25.695593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.233 [2024-07-15 13:04:25.695608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.492 [2024-07-15 13:04:25.699876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.492 [2024-07-15 13:04:25.699923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.492 [2024-07-15 13:04:25.699938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.704333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.704387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.704403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.708698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.708750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.708780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.712354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.712402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.712418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.717330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.717386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:13.493 [2024-07-15 13:04:25.717401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.721586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.721643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.721659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.725053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.725107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.725137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.730187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.730259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.730275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.734819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.734875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.734891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.738230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.738281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.738297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.742427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.742478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.742494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.746599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.746652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.746668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.751222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.751284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.751299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.755678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.755725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.755740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.759602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.759656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.759672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.763891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.763936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.763952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.768238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.768288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.768303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.771425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.771471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.771486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.776575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.776640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.776658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.781398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.781454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.781469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.784921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.784965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.784979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.789103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.789150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.789165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.793061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.793107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.793122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.796497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.801130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.801176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.801191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.805137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 
[2024-07-15 13:04:25.805188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.805202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.809208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.809254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.809269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.813685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.813730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.813746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.817444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.817487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.817503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.822066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.822115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.822131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.825192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.825236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.825251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.830072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.830121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.830142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.835180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.835227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.835251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.838588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.838629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.838643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.842812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.842853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.842867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.847143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.847185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.847200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.851146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.851188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.851202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.855169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.855212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.855226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.859115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.859157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.859172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.863305] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.863347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.863362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.867437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.493 [2024-07-15 13:04:25.867479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.493 [2024-07-15 13:04:25.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.493 [2024-07-15 13:04:25.870921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.870963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.870978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.875487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.875530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.875545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.879464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.879506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.879520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.883620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.883662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.883677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.886990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.887032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.887046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
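[Editor's note] The repeated "data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR" pairs above are the expected output of this digest-error test: NVMe/TCP protects each data PDU with a CRC-32C data digest (DDGST), the initiator recomputes the digest over the received payload, and on a mismatch the request is completed with a transient transport error, exactly as printed by spdk_nvme_print_completion. The sketch below is a minimal illustration of that check only; it is not SPDK's implementation (which uses accelerated CRC paths, as the function name nvme_tcp_accel_seq_recv_compute_crc32_done suggests), and the payload and digest values are invented for the example.

/*
 * Minimal sketch of the check behind the "data digest error" messages:
 * recompute CRC-32C over the PDU payload and compare it with the DDGST
 * carried in the PDU.  Payload contents and the "received" digest are
 * made up; only the CRC-32C algorithm itself is real.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (slow) CRC-32C: reflected form of polynomial 0x1EDC6F41. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Hypothetical 32-byte payload and a deliberately wrong digest,
	 * mimicking the corrupted data digests injected by this test. */
	uint8_t payload[32];
	memset(payload, 0xA5, sizeof(payload));

	uint32_t pdu_ddgst = 0xDEADBEEFu;              /* "received" DDGST   */
	uint32_t computed  = crc32c(payload, sizeof(payload));

	if (computed != pdu_ddgst) {
		printf("data digest error: computed 0x%08X, PDU carried 0x%08X\n",
		       computed, pdu_ddgst);
	}

	/* Sanity check: the standard CRC-32C check value for "123456789"
	 * is 0xE3069283. */
	printf("crc32c(\"123456789\") = 0x%08X\n",
	       crc32c((const uint8_t *)"123456789", 9));
	return 0;
}

When the digests disagree, the transport reports the error and the READ is completed with status "TRANSIENT TRANSPORT ERROR (00/22)", which is why each *ERROR* line above is followed by a matching command/completion pair.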
00:20:13.494 [2024-07-15 13:04:25.891513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.891555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.891569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.895926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.895969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.895983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.901380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.901451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.901468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.905852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.905902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.905917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.909082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.909127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.913895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.913942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.913957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.918714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.918758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.918789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.922400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.922441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.922455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.926387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.926432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.926447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.930719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.930775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.930792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.935343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.935390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.935404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.938617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.938658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.938673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.943169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.943256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.943273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.946967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.947034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.947050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.951286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.951355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.951371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.955810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.955882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.955899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.494 [2024-07-15 13:04:25.959290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.494 [2024-07-15 13:04:25.959360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.494 [2024-07-15 13:04:25.959376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.963775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.963840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.963856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.968179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.968248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.968263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.972451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.972527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.972544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.976543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.976614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.976631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.981128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.981201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.981217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.985197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.985268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.985283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.988870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.993524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.993601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.993617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:25.998367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:25.998425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.754 [2024-07-15 13:04:25.998441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.754 [2024-07-15 13:04:26.002744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.754 [2024-07-15 13:04:26.002811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.002827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.006565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.006613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 
[2024-07-15 13:04:26.006628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.010848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.010897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.010912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.015544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.015593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.019884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.019950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.019965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.024355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.024430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.024445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.028027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.028094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.028110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.032685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.032736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.032752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.036800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.036849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.036865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.040806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.040851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.040866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.046781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.046865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.046893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.053274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.053371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.053399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.057142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.057205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.057222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.062566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.062629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.062646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.066844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.066904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.066920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.070978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.071028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.071043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.075424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.075472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.075487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.080696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.080754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.080785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.084186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.084234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.084249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.088441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.088491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.088506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.092411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.092464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.092480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.096812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.096880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.096898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.102152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.102205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.102222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.106616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.106673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.106690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.111661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.111713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.111729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.116411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.116459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.116475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.120230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.120280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.120296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.124853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.124906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.124930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.130603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.130651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.755 [2024-07-15 13:04:26.130667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.755 [2024-07-15 13:04:26.135154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1288380) 00:20:13.755 [2024-07-15 13:04:26.135200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.135215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.139009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.139051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.139066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.142590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.142639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.142654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.147963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.148019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.153418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.153471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.153488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.156820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.156867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.156882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.161080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.161123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.166283] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.166328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.166342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.170899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.170943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.170958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.174675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.174728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.174751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.177932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.177977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.177992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.182696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.182744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.182760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.187924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.187973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.187988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.191288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.191329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.191344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:13.756 [2024-07-15 13:04:26.195476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.195519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.195534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.200313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.200355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.200371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.203555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.203600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.203615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.208432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.208503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.208520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.212644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.212721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.216339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.216384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.216399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:13.756 [2024-07-15 13:04:26.221030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:13.756 [2024-07-15 13:04:26.221099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.756 [2024-07-15 13:04:26.221115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.225062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.225135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.225151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.229493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.229573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.229590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.234152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.234231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.234248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.237818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.237905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.237922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.242920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.242999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.243016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.247844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.247922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.247938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.251697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.251786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.251804] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.256098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.256165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.256181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.260185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.260236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.260252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.264614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.264676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.264693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.269856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.269916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.269932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.275393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.275465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.275491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.281159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.281226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.281242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.286376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.286427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.286443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.289235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.289275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.289290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.294727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.294791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.294807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.298933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.298977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.298992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.302463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.302512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.302526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.306935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.307007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.307024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.311277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.311347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.311363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.315422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.315482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:14.016 [2024-07-15 13:04:26.315498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.319548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.319592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.319607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.324172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.324218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.324233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.016 [2024-07-15 13:04:26.328122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.016 [2024-07-15 13:04:26.328168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.016 [2024-07-15 13:04:26.328182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.332451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.332497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.332512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.336084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.336145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.336160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.340670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.340744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.340760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.344546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.344603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.344619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.348607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.348652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.348667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.353123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.353170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.353184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.357502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.357551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.357567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.361501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.361547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.361562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.365698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.365743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.365758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.369284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.369326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.369340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.373458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.373502] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.373517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.377068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.377114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.377128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.381629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.381692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.385863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.385918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.385932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.390205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.390249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.390263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.394444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.394487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.394502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.398026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.398067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.398082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.402507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.402554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.402569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.406151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.406196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.406210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.410338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.410400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.410416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.414324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.414376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.414391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.418478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.418534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.418549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.421976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.422028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.422043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.426806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.426864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.426879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.430711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 
00:20:14.017 [2024-07-15 13:04:26.430782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.430799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.435334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.435391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.435408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.439949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.440001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.440016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.444119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.444174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.444191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.447813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.017 [2024-07-15 13:04:26.447863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.017 [2024-07-15 13:04:26.447879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.017 [2024-07-15 13:04:26.452845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.452899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.452915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.457274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.457325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.457341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.461487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.461540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.461556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.465548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.465611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.465634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.470285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.470338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.470354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.474260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.474328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.474355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.018 [2024-07-15 13:04:26.479420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.018 [2024-07-15 13:04:26.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.018 [2024-07-15 13:04:26.479486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.278 [2024-07-15 13:04:26.484025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.278 [2024-07-15 13:04:26.484074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.278 [2024-07-15 13:04:26.484090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.278 [2024-07-15 13:04:26.488220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.278 [2024-07-15 13:04:26.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.488283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.492024] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.492068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.492084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.496427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.496471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.496486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.500783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.500831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.500847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.505101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.505148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.505163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.508374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.508420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.508435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.512526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.512573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.512588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.516425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.516474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.516489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:14.279 [2024-07-15 13:04:26.519696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.519741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.519756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.524275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.524324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.524339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.529162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.529211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.529227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.534155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.534206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.534223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.537934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.537986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.538003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.542265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.542314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.542330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.547508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.547566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.547582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.552722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.552797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.552814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.555811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.555858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.555873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.560173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.560226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.560242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.564934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.564982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.564997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.568899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.568948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.568963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.572829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.572877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.572893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.577099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.577151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.577167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.581199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.581252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.581269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.585136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.585183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.585199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.588980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.589026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.589041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.592739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.592809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.592825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.596794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.596839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.596855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.602191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.602243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.279 [2024-07-15 13:04:26.602258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.605583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.605637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.279 [2024-07-15 13:04:26.605653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.279 [2024-07-15 13:04:26.610199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.279 [2024-07-15 13:04:26.610258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.615073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.615128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.615144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.618388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.618453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.622706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.622757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.627872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.627927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.627943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.631212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.631269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.631286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.635425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.635477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.635493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.639679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.639730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.639744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.644169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.644219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.644235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.647717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.647777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.647795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.652109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.652156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.652171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.655832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.655876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.655891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.659699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.659754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.659786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.663580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.663637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.663659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.668452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.668507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.668522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.671932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.671973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.671987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.676247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.676290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.676305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.681069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.681115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.681130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.685818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.685860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.685875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.689092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.689134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.689149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.693232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 
[2024-07-15 13:04:26.693274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.693289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.698095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.698139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.698154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.701587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.701629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.701644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.706290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.706344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.706359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.709594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.709638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.709653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.713826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.713874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.713890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.718686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.718734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.718750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.722609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.722653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.722668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.280 [2024-07-15 13:04:26.726520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.280 [2024-07-15 13:04:26.726582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.280 [2024-07-15 13:04:26.726605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.281 [2024-07-15 13:04:26.733435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.281 [2024-07-15 13:04:26.733511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.281 [2024-07-15 13:04:26.733537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.281 [2024-07-15 13:04:26.738501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.281 [2024-07-15 13:04:26.738560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.281 [2024-07-15 13:04:26.738577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.281 [2024-07-15 13:04:26.743578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.281 [2024-07-15 13:04:26.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.281 [2024-07-15 13:04:26.743675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.747080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.747130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.747146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.751600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.751668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.755647] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.755696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.755712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.760195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.760244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.760260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.765453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.765504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.765521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.769747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.769825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.769848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.775509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.775560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.775575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.779004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.779047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.779062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.783507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.783553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.783569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:14.540 [2024-07-15 13:04:26.787824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.787869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.787884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.792127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.792170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.792185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.796591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.796648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.796666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.800724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.800779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.800796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.804492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.804536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.804551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.809848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.809920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.809945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.815783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.815842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.815858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.820037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.820085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.820100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.824565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.824620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.824637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.829073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.540 [2024-07-15 13:04:26.829122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.540 [2024-07-15 13:04:26.829138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.540 [2024-07-15 13:04:26.834757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.834824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.834840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.839229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.839319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.839345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.845601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.845697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.845725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.851542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.851643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.851668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.858754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.858871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.858896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.865387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.865482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.865507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.870679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.870749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.870791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.875756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.875833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.875859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.881464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.881523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.881539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.887549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.887625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.887649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.892000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.892053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.892068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.898301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.898375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.898397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.906076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.906152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.906179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.913049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.913120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.913149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.918166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.918233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.918259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.925239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.925313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.925340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.932805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.932875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.932900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.940314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.940385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 
[2024-07-15 13:04:26.940412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:14.541 [2024-07-15 13:04:26.947828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1288380) 00:20:14.541 [2024-07-15 13:04:26.947892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.541 [2024-07-15 13:04:26.947917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:14.541 00:20:14.541 Latency(us) 00:20:14.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.541 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:14.541 nvme0n1 : 2.00 7079.58 884.95 0.00 0.00 2255.51 662.81 10247.45 00:20:14.541 =================================================================================================================== 00:20:14.541 Total : 7079.58 884.95 0.00 0.00 2255.51 662.81 10247.45 00:20:14.541 0 00:20:14.541 13:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:14.541 13:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:14.541 13:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:14.541 13:04:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:14.541 | .driver_specific 00:20:14.541 | .nvme_error 00:20:14.541 | .status_code 00:20:14.541 | .command_transient_transport_error' 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 457 > 0 )) 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93650 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93650 ']' 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93650 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93650 00:20:14.799 killing process with pid 93650 00:20:14.799 Received shutdown signal, test time was about 2.000000 seconds 00:20:14.799 00:20:14.799 Latency(us) 00:20:14.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.799 =================================================================================================================== 00:20:14.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93650' 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # 
kill 93650 00:20:14.799 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93650 00:20:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93746 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93746 /var/tmp/bperf.sock 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93746 ']' 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.057 13:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.057 [2024-07-15 13:04:27.473615] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:20:15.057 [2024-07-15 13:04:27.473704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93746 ] 00:20:15.315 [2024-07-15 13:04:27.607037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.315 [2024-07-15 13:04:27.664800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.246 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:16.247 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.247 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:16.247 13:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:16.813 nvme0n1 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:16.813 13:04:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:16.813 Running I/O for 2 seconds... 
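For readers skimming the trace above, the randwrite error-injection pass boils down to the short sketch below. This is an illustrative reconstruction assembled only from commands visible in the trace (socket paths, the 10.0.0.2:4420 target address, the nqn.2016-06.io.spdk:cnode1 subsystem and the nvme0/nvme0n1 names are taken from it); the suite's bperf_rpc/rpc_cmd wrappers are assumed to forward to rpc.py as shown, and it is not part of the original console output.

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk

# 1. Start bdevperf paused (-z) with its own RPC socket and a 2-second randwrite job.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# 2. Enable per-status-code NVMe error counters and unlimited bdev retries on the host.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Attach the NVMe-oF TCP controller with data digest enabled (--ddgst).
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Inject crc32c corruption through the accel framework for 256 operations so
#    digest verification fails. The trace issues this via the suite's rpc_cmd
#    helper; which application socket it targets is not visible in this excerpt.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# 5. Run the workload, then read back how many commands completed with
#    TRANSIENT TRANSPORT ERROR; the test asserts the counter is > 0.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Because the retry count is unlimited and the digest failures complete as retriable COMMAND TRANSIENT TRANSPORT ERROR (dnr:0), the I/O itself keeps succeeding and the check reduces to the counter comparison seen earlier in this log, where the randread pass read back 457 and passed the (( 457 > 0 )) assertion.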
00:20:16.813 [2024-07-15 13:04:29.148446] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f6458 00:20:16.813 [2024-07-15 13:04:29.149570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.160606] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4f40 00:20:16.813 [2024-07-15 13:04:29.161699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.161742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.173602] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc998 00:20:16.813 [2024-07-15 13:04:29.174786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.174837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.185714] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2d80 00:20:16.813 [2024-07-15 13:04:29.186547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.186589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.196837] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fcdd0 00:20:16.813 [2024-07-15 13:04:29.197618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.197677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.211554] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc998 00:20:16.813 [2024-07-15 13:04:29.213076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.213123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.223172] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e12d8 00:20:16.813 [2024-07-15 13:04:29.224542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.224606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.234749] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ee5c8 00:20:16.813 [2024-07-15 13:04:29.235907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.235953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.246842] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f7da8 00:20:16.813 [2024-07-15 13:04:29.247675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.247718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.258485] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ea248 00:20:16.813 [2024-07-15 13:04:29.259209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.259267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:16.813 [2024-07-15 13:04:29.272502] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4b08 00:20:16.813 [2024-07-15 13:04:29.274012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.813 [2024-07-15 13:04:29.274050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.283835] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190efae0 00:20:17.071 [2024-07-15 13:04:29.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.285269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.295742] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1b48 00:20:17.071 [2024-07-15 13:04:29.297142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.297188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.307832] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fac10 00:20:17.071 [2024-07-15 13:04:29.308665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.308705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.319702] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f81e0 00:20:17.071 [2024-07-15 13:04:29.320466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.320514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.332449] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1868 00:20:17.071 [2024-07-15 13:04:29.333669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.333715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.347341] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2d80 00:20:17.071 [2024-07-15 13:04:29.349244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.349286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.355963] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f3e60 00:20:17.071 [2024-07-15 13:04:29.356885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.356924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.368200] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eaab8 00:20:17.071 [2024-07-15 13:04:29.369120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.369164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.379622] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1868 00:20:17.071 [2024-07-15 13:04:29.380400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.071 [2024-07-15 13:04:29.380440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:17.071 [2024-07-15 13:04:29.394182] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e88f8 00:20:17.072 [2024-07-15 13:04:29.395758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.395822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.405394] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eee38 00:20:17.072 [2024-07-15 13:04:29.406715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.406758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.417038] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e6b70 00:20:17.072 [2024-07-15 13:04:29.418298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.418337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.431372] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dece0 00:20:17.072 [2024-07-15 13:04:29.433302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.433343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.439916] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1710 00:20:17.072 [2024-07-15 13:04:29.440881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.440919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.454336] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4f40 00:20:17.072 [2024-07-15 13:04:29.455988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.456029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.465487] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:17.072 [2024-07-15 13:04:29.467072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.467117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.477232] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e0a68 00:20:17.072 [2024-07-15 13:04:29.478568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.478610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.488405] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ea248 00:20:17.072 [2024-07-15 13:04:29.489535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.489577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.500152] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc560 00:20:17.072 [2024-07-15 13:04:29.501217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.501257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.514515] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ec840 00:20:17.072 [2024-07-15 13:04:29.516247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.516287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.523205] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:17.072 [2024-07-15 13:04:29.523992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.524030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:17.072 [2024-07-15 13:04:29.535316] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fa3a0 00:20:17.072 [2024-07-15 13:04:29.536101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.072 [2024-07-15 13:04:29.536142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.549374] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190efae0 00:20:17.330 [2024-07-15 13:04:29.550821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 13:04:29.550865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.560558] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7818 00:20:17.330 [2024-07-15 13:04:29.562009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 
13:04:29.562061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.571841] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ef6a8 00:20:17.330 [2024-07-15 13:04:29.572629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 13:04:29.572666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.586114] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e12d8 00:20:17.330 [2024-07-15 13:04:29.587740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 13:04:29.587790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.597229] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc998 00:20:17.330 [2024-07-15 13:04:29.598595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 13:04:29.598634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.608842] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f5378 00:20:17.330 [2024-07-15 13:04:29.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.330 [2024-07-15 13:04:29.610203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:17.330 [2024-07-15 13:04:29.623278] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f46d0 00:20:17.330 [2024-07-15 13:04:29.625298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.625346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.631870] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e3d08 00:20:17.331 [2024-07-15 13:04:29.632918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.632957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.646102] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de470 00:20:17.331 [2024-07-15 13:04:29.647823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:17.331 [2024-07-15 13:04:29.647860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.654567] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df988 00:20:17.331 [2024-07-15 13:04:29.655318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.655357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.669505] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e49b0 00:20:17.331 [2024-07-15 13:04:29.671035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.671094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.682224] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7c50 00:20:17.331 [2024-07-15 13:04:29.683655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.683698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.693726] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e5ec8 00:20:17.331 [2024-07-15 13:04:29.695179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.695227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.705716] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df118 00:20:17.331 [2024-07-15 13:04:29.706912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.706955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.718203] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fda78 00:20:17.331 [2024-07-15 13:04:29.719388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.719431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.732953] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4b08 00:20:17.331 [2024-07-15 13:04:29.734743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17702 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.734794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.745240] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e6b70 00:20:17.331 [2024-07-15 13:04:29.747100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.747146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.756890] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7c50 00:20:17.331 [2024-07-15 13:04:29.758530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.758573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.765803] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc998 00:20:17.331 [2024-07-15 13:04:29.766597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.766636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.780289] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1430 00:20:17.331 [2024-07-15 13:04:29.781631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.781675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:17.331 [2024-07-15 13:04:29.791903] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e0630 00:20:17.331 [2024-07-15 13:04:29.793059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.331 [2024-07-15 13:04:29.793099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.803462] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1b48 00:20:17.589 [2024-07-15 13:04:29.804474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.804513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.817681] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f7538 00:20:17.589 [2024-07-15 13:04:29.819505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.819542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.826312] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e9e10 00:20:17.589 [2024-07-15 13:04:29.827166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.827206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.838674] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f57b0 00:20:17.589 [2024-07-15 13:04:29.839559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.839626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.853348] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dfdc0 00:20:17.589 [2024-07-15 13:04:29.854928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.854975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.865073] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190edd58 00:20:17.589 [2024-07-15 13:04:29.866462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.866502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.879091] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e6fa8 00:20:17.589 [2024-07-15 13:04:29.881085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.881127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.887909] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f92c0 00:20:17.589 [2024-07-15 13:04:29.888785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.589 [2024-07-15 13:04:29.888830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:17.589 [2024-07-15 13:04:29.902448] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190feb58 00:20:17.589 [2024-07-15 13:04:29.904126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:11283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.904166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.913233] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dece0 00:20:17.590 [2024-07-15 13:04:29.915185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.915223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.923688] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fd208 00:20:17.590 [2024-07-15 13:04:29.924538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.924572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.938184] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7c50 00:20:17.590 [2024-07-15 13:04:29.939725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.939785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.950777] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f9b30 00:20:17.590 [2024-07-15 13:04:29.952452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.952489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.962191] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dece0 00:20:17.590 [2024-07-15 13:04:29.963523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.963566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.975737] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e99d8 00:20:17.590 [2024-07-15 13:04:29.977000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.977042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.987473] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7c50 00:20:17.590 [2024-07-15 13:04:29.988868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:29.988906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:29.999548] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f8a50 00:20:17.590 [2024-07-15 13:04:30.000923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:30.000959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:30.010889] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2d80 00:20:17.590 [2024-07-15 13:04:30.012133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:30.012169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:30.022444] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1430 00:20:17.590 [2024-07-15 13:04:30.023149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:30.023185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:30.034048] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e27f0 00:20:17.590 [2024-07-15 13:04:30.035118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:30.035156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.590 [2024-07-15 13:04:30.045663] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4f40 00:20:17.590 [2024-07-15 13:04:30.046714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.590 [2024-07-15 13:04:30.046750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.059981] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e0630 00:20:17.858 [2024-07-15 13:04:30.061689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.061726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.068434] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2d80 00:20:17.858 [2024-07-15 
13:04:30.069186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.069220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.082720] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ecc78 00:20:17.858 [2024-07-15 13:04:30.084165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.084202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.093859] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e3d08 00:20:17.858 [2024-07-15 13:04:30.095370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.095427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.105465] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de038 00:20:17.858 [2024-07-15 13:04:30.106594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.106634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.119869] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e49b0 00:20:17.858 [2024-07-15 13:04:30.121680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.121718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.132030] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e6738 00:20:17.858 [2024-07-15 13:04:30.133891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.133927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.143424] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f3a28 00:20:17.858 [2024-07-15 13:04:30.145109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.145160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.152249] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eee38 
00:20:17.858 [2024-07-15 13:04:30.153097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.153148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.164475] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f92c0 00:20:17.858 [2024-07-15 13:04:30.165296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.165331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.178580] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f9f68 00:20:17.858 [2024-07-15 13:04:30.179600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.190080] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190feb58 00:20:17.858 [2024-07-15 13:04:30.191517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.191555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.201860] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e95a0 00:20:17.858 [2024-07-15 13:04:30.203221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.203264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.212998] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:17.858 [2024-07-15 13:04:30.214113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.214162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.224961] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2510 00:20:17.858 [2024-07-15 13:04:30.226039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.226074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.239382] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with 
pdu=0x2000190f96f8 00:20:17.858 [2024-07-15 13:04:30.241140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.241176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.247927] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190feb58 00:20:17.858 [2024-07-15 13:04:30.248682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.248715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.262355] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ed0b0 00:20:17.858 [2024-07-15 13:04:30.263813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.263849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.274396] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f4b08 00:20:17.858 [2024-07-15 13:04:30.275356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.275393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.285636] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1ca0 00:20:17.858 [2024-07-15 13:04:30.287545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.287582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.299119] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eb328 00:20:17.858 [2024-07-15 13:04:30.300568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.300604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:17.858 [2024-07-15 13:04:30.310425] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc560 00:20:17.858 [2024-07-15 13:04:30.311874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.858 [2024-07-15 13:04:30.311909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.321725] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156cc70) with pdu=0x2000190e99d8 00:20:18.126 [2024-07-15 13:04:30.322799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.322835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.333584] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e4de8 00:20:18.126 [2024-07-15 13:04:30.334743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.334791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.348958] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f5be8 00:20:18.126 [2024-07-15 13:04:30.350823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.350868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.359658] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e7818 00:20:18.126 [2024-07-15 13:04:30.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.360696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.371421] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:18.126 [2024-07-15 13:04:30.372336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.372374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.382896] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eaab8 00:20:18.126 [2024-07-15 13:04:30.383581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.383620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.396686] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ddc00 00:20:18.126 [2024-07-15 13:04:30.398239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.398279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.408346] tcp.c:2164:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ee190 00:20:18.126 [2024-07-15 13:04:30.409718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.409755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.419399] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f1868 00:20:18.126 [2024-07-15 13:04:30.420608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.420644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.431069] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de038 00:20:18.126 [2024-07-15 13:04:30.432273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.432309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.443293] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df118 00:20:18.126 [2024-07-15 13:04:30.444472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.444507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.456942] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ef270 00:20:18.126 [2024-07-15 13:04:30.458618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.458653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.468142] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1b48 00:20:18.126 [2024-07-15 13:04:30.469514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.469551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.480090] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ee5c8 00:20:18.126 [2024-07-15 13:04:30.481359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.481398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.493945] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e4140 00:20:18.126 [2024-07-15 13:04:30.495808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.495849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.502562] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e38d0 00:20:18.126 [2024-07-15 13:04:30.503559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.517325] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df118 00:20:18.126 [2024-07-15 13:04:30.518941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.518984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.528512] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df988 00:20:18.126 [2024-07-15 13:04:30.529813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.126 [2024-07-15 13:04:30.529853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.126 [2024-07-15 13:04:30.540195] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e2c28 00:20:18.126 [2024-07-15 13:04:30.541448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.127 [2024-07-15 13:04:30.541483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:18.127 [2024-07-15 13:04:30.554413] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0350 00:20:18.127 [2024-07-15 13:04:30.556363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.127 [2024-07-15 13:04:30.556404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:18.127 [2024-07-15 13:04:30.562930] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f7538 00:20:18.127 [2024-07-15 13:04:30.563898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.127 [2024-07-15 13:04:30.563935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.127 
[2024-07-15 13:04:30.577262] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e99d8 00:20:18.127 [2024-07-15 13:04:30.578912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.127 [2024-07-15 13:04:30.578950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:18.127 [2024-07-15 13:04:30.589380] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e5ec8 00:20:18.127 [2024-07-15 13:04:30.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.127 [2024-07-15 13:04:30.591071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:18.385 [2024-07-15 13:04:30.601023] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e0a68 00:20:18.385 [2024-07-15 13:04:30.602577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.385 [2024-07-15 13:04:30.602620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:18.385 [2024-07-15 13:04:30.613564] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f3a28 00:20:18.385 [2024-07-15 13:04:30.614950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.385 [2024-07-15 13:04:30.614992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:18.385 [2024-07-15 13:04:30.625170] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f2948 00:20:18.385 [2024-07-15 13:04:30.626388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.385 [2024-07-15 13:04:30.626428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:18.385 [2024-07-15 13:04:30.636684] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f8618 00:20:18.385 [2024-07-15 13:04:30.637725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.385 [2024-07-15 13:04:30.637777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.385 [2024-07-15 13:04:30.650805] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ef6a8 00:20:18.385 [2024-07-15 13:04:30.652490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.652532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:20:18.386 [2024-07-15 13:04:30.661471] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e5658 00:20:18.386 [2024-07-15 13:04:30.662728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.662788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.674934] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e9e10 00:20:18.386 [2024-07-15 13:04:30.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.676713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.686116] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e95a0 00:20:18.386 [2024-07-15 13:04:30.687550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.687590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.697980] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de8a8 00:20:18.386 [2024-07-15 13:04:30.699212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.699264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.709410] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1b48 00:20:18.386 [2024-07-15 13:04:30.710511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.710552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.720939] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e88f8 00:20:18.386 [2024-07-15 13:04:30.721898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.721936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.732819] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fdeb0 00:20:18.386 [2024-07-15 13:04:30.733567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.733606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 
cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.748339] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ef270 00:20:18.386 [2024-07-15 13:04:30.750122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.750166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.760115] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fb048 00:20:18.386 [2024-07-15 13:04:30.761723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.761761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.771856] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:18.386 [2024-07-15 13:04:30.773309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.773350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.783538] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e0a68 00:20:18.386 [2024-07-15 13:04:30.784834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.784877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.794929] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f20d8 00:20:18.386 [2024-07-15 13:04:30.796066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.796105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.806609] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ddc00 00:20:18.386 [2024-07-15 13:04:30.807562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.807599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.820721] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de8a8 00:20:18.386 [2024-07-15 13:04:30.822480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.822531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.829249] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f7970 00:20:18.386 [2024-07-15 13:04:30.830022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.830057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.386 [2024-07-15 13:04:30.843477] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fb480 00:20:18.386 [2024-07-15 13:04:30.844973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.386 [2024-07-15 13:04:30.845009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.855562] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e1f80 00:20:18.644 [2024-07-15 13:04:30.857006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.857055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.869097] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e88f8 00:20:18.644 [2024-07-15 13:04:30.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.871090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.877547] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dfdc0 00:20:18.644 [2024-07-15 13:04:30.878372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.878408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.892762] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ea680 00:20:18.644 [2024-07-15 13:04:30.894614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.894649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.901612] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fc128 00:20:18.644 [2024-07-15 13:04:30.902623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.902676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.913934] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fbcf0 00:20:18.644 [2024-07-15 13:04:30.914927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.914961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.925459] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e8d30 00:20:18.644 [2024-07-15 13:04:30.926282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.926315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.940070] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ec840 00:20:18.644 [2024-07-15 13:04:30.941726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.941775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.952234] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190df988 00:20:18.644 [2024-07-15 13:04:30.953874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.644 [2024-07-15 13:04:30.953911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:18.644 [2024-07-15 13:04:30.963813] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e01f8 00:20:18.645 [2024-07-15 13:04:30.965310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:30.965346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:30.974622] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e4de8 00:20:18.645 [2024-07-15 13:04:30.976580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:30.976617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:30.986460] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190de8a8 00:20:18.645 [2024-07-15 13:04:30.987327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:30.987365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:30.999464] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190dece0 00:20:18.645 [2024-07-15 13:04:31.000313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.000349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.015347] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fa7d8 00:20:18.645 [2024-07-15 13:04:31.017302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.017340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.023821] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e6738 00:20:18.645 [2024-07-15 13:04:31.024794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.024828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.035963] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f96f8 00:20:18.645 [2024-07-15 13:04:31.036946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.036981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.047269] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190eb328 00:20:18.645 [2024-07-15 13:04:31.048110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.048146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.061803] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f92c0 00:20:18.645 [2024-07-15 13:04:31.063285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.063321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.073064] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f0ff8 00:20:18.645 [2024-07-15 13:04:31.074370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 
13:04:31.074405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.084308] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190fac10 00:20:18.645 [2024-07-15 13:04:31.085496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.085547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.095599] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e95a0 00:20:18.645 [2024-07-15 13:04:31.096623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:18.645 [2024-07-15 13:04:31.107121] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190e4140 00:20:18.645 [2024-07-15 13:04:31.108012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.645 [2024-07-15 13:04:31.108051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:18.902 [2024-07-15 13:04:31.119629] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190ec408 00:20:18.902 [2024-07-15 13:04:31.120636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.902 [2024-07-15 13:04:31.120679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.902 [2024-07-15 13:04:31.134209] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156cc70) with pdu=0x2000190f7538 00:20:18.902 [2024-07-15 13:04:31.135933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.902 [2024-07-15 13:04:31.135973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:18.902 00:20:18.902 Latency(us) 00:20:18.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.902 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:18.902 nvme0n1 : 2.00 21053.92 82.24 0.00 0.00 6073.26 2487.39 16324.42 00:20:18.902 =================================================================================================================== 00:20:18.902 Total : 21053.92 82.24 0.00 0.00 6073.26 2487.39 16324.42 00:20:18.902 0 00:20:18.902 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:18.902 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:18.902 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:20:18.902 | .driver_specific 00:20:18.902 | .nvme_error 00:20:18.902 | .status_code 00:20:18.902 | .command_transient_transport_error' 00:20:18.902 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93746 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93746 ']' 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93746 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93746 00:20:19.160 killing process with pid 93746 00:20:19.160 Received shutdown signal, test time was about 2.000000 seconds 00:20:19.160 00:20:19.160 Latency(us) 00:20:19.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.160 =================================================================================================================== 00:20:19.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93746' 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93746 00:20:19.160 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93746 00:20:19.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
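The trace above reads the per-bdev NVMe error counters back over the bperf RPC socket and requires that at least one command completed with a transient transport error before killing the bdevperf process. A minimal sketch of that check, assuming only the rpc.py path, socket, and jq filter visible in the trace (the helper name mirrors get_transient_errcount from host/digest.sh; bdev_nvme keeps these counters when error statistics are enabled via bdev_nvme_set_options --nvme-error-stat, as traced later in this log):

    # Count completions recorded as TRANSIENT TRANSPORT ERROR (00/22) for a bdev
    # and assert the digest-error run produced at least one of them.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the run above reported 165 such completions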
00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93832 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93832 /var/tmp/bperf.sock 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93832 ']' 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.418 13:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:19.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:19.418 Zero copy mechanism will not be used. 00:20:19.418 [2024-07-15 13:04:31.694798] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:20:19.418 [2024-07-15 13:04:31.694898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93832 ] 00:20:19.418 [2024-07-15 13:04:31.828194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.418 [2024-07-15 13:04:31.885716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.351 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.351 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:20.351 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:20.351 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.610 13:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.868 nvme0n1 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:20.868 13:04:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:21.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:21.127 Zero copy mechanism will not be used. 00:20:21.127 Running I/O for 2 seconds... 
00:20:21.127 [2024-07-15 13:04:33.353846] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.354186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.359178] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.359485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.359515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.364566] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.364876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.364907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.369977] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.370277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.370305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.375282] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.375582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.375611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.381499] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.381821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.381856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.386877] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.387173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.387209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.392110] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.392405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.392432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.397336] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.397629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.397665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.402621] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.402930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.402967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.407924] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.408232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.408268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.413247] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.413543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.413580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.418553] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.418862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.418897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.423974] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.424268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.424298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.429217] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.429516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.429552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.434540] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.434844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.434875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.439853] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.440150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.440182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.445147] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.445438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.445475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.450362] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.450654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.450686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.455627] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.455934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.455970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.460860] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.461149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.461185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.466093] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.466384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.466412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.471345] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.471635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.471671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.476614] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.476927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.476963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.481860] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.482151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.482176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.487071] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.487371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.487404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.492277] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.492566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.492601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.497485] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.497788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 
[2024-07-15 13:04:33.497824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.502672] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.502978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.503013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.507930] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.508225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.508262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.513188] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.513479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.518373] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.518666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.518696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.523577] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.523883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.523919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.528866] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.529158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.529186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.534094] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.534385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.534416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.539289] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.539586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.539622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.544508] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.544830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.544860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.549757] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.550062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.550095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.555027] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.555328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.555358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.560234] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.560527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.560568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.565526] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.565854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.127 [2024-07-15 13:04:33.565890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.127 [2024-07-15 13:04:33.570745] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.127 [2024-07-15 13:04:33.571048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.128 [2024-07-15 13:04:33.571080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.128 [2024-07-15 13:04:33.576088] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.128 [2024-07-15 13:04:33.576385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.128 [2024-07-15 13:04:33.576421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.128 [2024-07-15 13:04:33.581326] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.128 [2024-07-15 13:04:33.581618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.128 [2024-07-15 13:04:33.581655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.128 [2024-07-15 13:04:33.586631] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.128 [2024-07-15 13:04:33.586937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.128 [2024-07-15 13:04:33.586972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.128 [2024-07-15 13:04:33.591897] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.128 [2024-07-15 13:04:33.592190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.128 [2024-07-15 13:04:33.592225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.597129] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.597421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.597457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.602377] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.602668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.602694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.607642] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.607950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.607989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.612983] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.613289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.613317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.618347] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.618640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.618672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.623684] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.623988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.624024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.628990] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.629286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.629315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.634331] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.634622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.634651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.641349] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.641688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.641717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.648326] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 
[2024-07-15 13:04:33.648665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.648694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.655214] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.655556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.655584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.662035] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.662367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.668909] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.669242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.669269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.675757] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.676135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.682632] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.682975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.683004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.689523] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.689867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.689895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.696342] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.696638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.696667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.701619] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.701927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.701951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.706810] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.707103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.707131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.712163] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.712457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.712487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.717396] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.717687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.717716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.722605] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.722912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.722936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.727830] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.728123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.386 [2024-07-15 13:04:33.728151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.386 [2024-07-15 13:04:33.733096] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.386 [2024-07-15 13:04:33.733393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.733422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.738338] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.738627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.738655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.743604] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.743908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.743938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.748907] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.749198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.749226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.754152] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.754443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.754472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.759392] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.759687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.759715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.764589] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.764893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.764921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:21.387 [2024-07-15 13:04:33.769845] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.770138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.770166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.775094] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.775393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.775420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.780356] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.780646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.780674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.785603] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.785906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.785930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.790844] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.791134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.791163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.796175] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.796496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.796524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.801519] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.801855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.801883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.806843] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.807128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.812153] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.812448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.812476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.817408] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.817746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.822746] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.823050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.823078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.828099] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.828407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.828436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.833499] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.833805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.833845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.838806] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.839143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.844095] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.844385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.844414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.387 [2024-07-15 13:04:33.849367] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.387 [2024-07-15 13:04:33.849657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.387 [2024-07-15 13:04:33.849686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.854685] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.854990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.855019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.859965] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.860257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.860286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.865149] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.865440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.870408] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.870715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.870745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.875656] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.875970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.875998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.880909] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.881218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.881246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.886160] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.886452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.886480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.891500] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.891821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.891849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.896847] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.897134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.897163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.902231] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.902539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.902568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.907650] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.907957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.907985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.912974] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.913270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 
[2024-07-15 13:04:33.913297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.918242] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.645 [2024-07-15 13:04:33.918537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.645 [2024-07-15 13:04:33.918565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.645 [2024-07-15 13:04:33.923618] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.923922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.923950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.929158] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.929468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.929497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.934576] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.934885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.934913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.939877] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.940173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.940201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.945107] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.945400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.945428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.950417] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.950707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.950736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.955655] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.955960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.955988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.961026] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.961352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.961387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.966411] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.966728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.966757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.971796] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.972119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.972150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.977158] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.977473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.977507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.982535] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.982883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.982917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.987930] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.988245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.988274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.993442] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.993774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.993818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:33.998846] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:33.999171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:33.999204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.004165] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.004487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.004520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.009585] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.009922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.009955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.014901] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.015216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.015253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.020267] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.020578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.020608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.025543] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.025864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.025888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.030729] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.031035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.031064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.035945] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.036236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.036265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.042927] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.043271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.043304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.048377] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.048675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.048704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.054094] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.054436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.054464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.059691] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.059999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.060027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.065042] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 
[2024-07-15 13:04:34.065334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.065363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.646 [2024-07-15 13:04:34.070526] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.646 [2024-07-15 13:04:34.070890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.646 [2024-07-15 13:04:34.070921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.075916] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.076224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.076255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.081204] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.081496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.081525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.086558] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.086895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.086934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.091873] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.092180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.092208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.097975] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.098309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.098337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.103682] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.104006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.104034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.647 [2024-07-15 13:04:34.109179] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.647 [2024-07-15 13:04:34.109507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.647 [2024-07-15 13:04:34.109536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.114619] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.114940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.114969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.120037] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.120334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.120361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.125329] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.125638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.125667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.130574] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.130896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.130925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.135882] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.136182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.136209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.141165] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.141458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.141486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.146533] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.146851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.146880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.151886] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.152180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.152209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.157132] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.157426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.157467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.162382] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.162679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.162708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.167619] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.167928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.167957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.172920] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.173210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.173239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:21.907 [2024-07-15 13:04:34.178169] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.907 [2024-07-15 13:04:34.178460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.907 [2024-07-15 13:04:34.178488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.907 [2024-07-15 13:04:34.183533] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.183853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.183882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.188987] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.189283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.189311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.194218] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.194509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.194539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.199567] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.199895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.199924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.204839] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.205130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.205158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.210075] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.210381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.210410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.215359] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.215650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.215678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.220599] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.220906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.220929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.225834] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.226127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.226155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.231157] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.231475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.231502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.236454] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.236747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.236786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.241673] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.241995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.242023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.246900] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.247202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.247231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.252193] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.252483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.257535] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.257853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.257881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.262775] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.263065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.263093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.268025] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.268316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.268357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.273306] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.273594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.273622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.278623] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.278943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.278971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.283904] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.284196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.284218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.289180] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.289471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.289499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.294464] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.294784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.294812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.299717] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.300025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.300053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.305051] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.305346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.305374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.310344] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.310636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.310664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.315664] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.315980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.316009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.321056] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.321350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 
[2024-07-15 13:04:34.321379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.326354] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.326645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.326674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.331718] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.908 [2024-07-15 13:04:34.332039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.908 [2024-07-15 13:04:34.332067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.908 [2024-07-15 13:04:34.337033] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.337328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.337357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.342373] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.342681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.342710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.347726] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.348050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.348079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.353038] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.353334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.353363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.358420] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.358716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.358740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.363948] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.364253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.364281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.909 [2024-07-15 13:04:34.369350] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:21.909 [2024-07-15 13:04:34.369654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.909 [2024-07-15 13:04:34.369683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.374624] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.374929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.374958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.379961] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.380263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.380291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.385262] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.385567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.385596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.391396] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.391745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.391796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.399435] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.399810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.399841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.407012] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.407375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.407405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.414426] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.414783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.414820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.421586] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.421955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.421989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.428879] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.429225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.429261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.436356] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.436708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.436737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.443738] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.444118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.444147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.451132] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.451475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.451504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.458651] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.459006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.459035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.466017] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.466317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.466346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.471223] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.471527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.471556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.476523] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.476832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.476855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.481755] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.482071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.482099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.487171] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.487489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.487516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.492480] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 
[2024-07-15 13:04:34.492786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.492815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.497787] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.498076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.498105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.503001] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.503302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.503329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.508264] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.508559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.508587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.513519] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.513837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.513865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.518740] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.519045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.519074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.523959] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.524255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.524284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.529206] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.529495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.169 [2024-07-15 13:04:34.529523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.169 [2024-07-15 13:04:34.534359] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.169 [2024-07-15 13:04:34.534647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.539596] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.539911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.539935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.544975] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.545287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.545315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.550335] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.550626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.550655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.555539] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.555843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.555871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.560732] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.561035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.561063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.566039] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.566336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.566365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.571294] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.571589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.571624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.576546] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.576852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.576881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.581740] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.582048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.582076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.586970] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.587275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.587303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.592238] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.592536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.592565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.597496] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.597834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.597862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
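Every record in this stretch of the log follows the same pattern: tcp.c:2164 (data_crc32_calc_done) reports a data digest error on the same queue pair (tqpair=(0x156ce10), pdu=0x2000190fef90), and the affected WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media or command error. In NVMe/TCP the data digest (DDGST) is a CRC32C computed over the PDU payload, so these errors mean the digest recomputed by the receiver no longer matches the digest carried in the PDU. The sketch below is a minimal, self-contained illustration of that check in plain C; it is not SPDK code, and the buffer contents and names are invented for the example.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (slow but dependency-free) CRC-32C: reflected polynomial 0x82F63B78,
 * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. NVMe/TCP uses CRC32C for its
 * header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload standing in for the DATA field of a WRITE PDU. */
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender would append to the PDU. */
    uint32_t sent_digest = crc32c(payload, sizeof(payload));

    /* Flip one bit to simulate corruption of the payload in flight. */
    payload[7] ^= 0x01;

    /* Digest the receiver recomputes over what it actually got. */
    uint32_t recv_digest = crc32c(payload, sizeof(payload));

    if (recv_digest != sent_digest) {
        /* This mismatch is the condition the log reports as a data digest
         * error; the request is then failed back to the submitter. */
        printf("data digest mismatch: sent=0x%08x recv=0x%08x\n",
               sent_digest, recv_digest);
    }
    return 0;
}

Because the resulting status is transient and the DNR (do not retry) bit is clear in every completion above, the initiator is permitted to resubmit the affected commands; the log simply records one digest-error/transient-error pair per corrupted WRITE.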
00:20:22.170 [2024-07-15 13:04:34.602699] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.603046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.607932] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.608253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.613222] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.613549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.618534] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.618859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.623871] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.624169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.624199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.629164] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.629457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.629485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.170 [2024-07-15 13:04:34.634402] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.170 [2024-07-15 13:04:34.634693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.170 [2024-07-15 13:04:34.634721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.639657] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.639991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.644893] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.645191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.645219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.650178] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.650469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.650499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.655450] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.655742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.655780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.660795] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.661143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.661171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.667659] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.667982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.673189] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.673495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.673526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.678744] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.679061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.679090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.684203] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.684495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.684533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.689630] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.689935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.430 [2024-07-15 13:04:34.689972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.430 [2024-07-15 13:04:34.694858] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.430 [2024-07-15 13:04:34.695154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.695182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.700192] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.700499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.700526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.705566] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.705886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.705914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.711014] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.711336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.711363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.716409] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.716713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.716745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.721744] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.722072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.722106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.727115] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.727430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.727463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.732555] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.732885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.732917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.737965] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.738277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.738309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.743258] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.743551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.743579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.748647] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.748971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 
[2024-07-15 13:04:34.749003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.754011] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.754316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.754348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.759371] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.759664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.759692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.764677] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.764992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.765020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.770119] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.770412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.770440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.775386] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.775681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.775708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.780602] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.780908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.780936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.785837] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.786129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.791091] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.791392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.791420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.796373] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.796661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.796703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.801647] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.801950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.801979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.806923] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.807221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.807256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.812151] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.812446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.812474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.817402] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.817692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.817715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.822656] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.822961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.822988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.827918] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.828216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.828244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.833141] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.833434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.833471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.838476] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.838791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.838820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.843926] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.844226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.431 [2024-07-15 13:04:34.844256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.431 [2024-07-15 13:04:34.849339] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.431 [2024-07-15 13:04:34.849656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.849686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.854706] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.855013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.855043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.860128] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.860431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.860462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.865575] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.865911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.865942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.871061] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.871370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.876550] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.876889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.876927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.882084] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.882381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.882411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.887399] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.887692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.887721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.432 [2024-07-15 13:04:34.892609] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.432 [2024-07-15 13:04:34.892915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.432 [2024-07-15 13:04:34.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.897981] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 
[2024-07-15 13:04:34.898273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.898296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.903353] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.903660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.903689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.908601] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.908909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.908937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.913964] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.914256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.914284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.919300] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.919591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.919618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.924515] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.924833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.924856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.929755] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.930066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.930094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.935001] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.935298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.935327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.940189] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.940479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.940501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.945424] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.945725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.945755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.950680] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.950994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.951022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.955950] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.956247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.956270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.693 [2024-07-15 13:04:34.961170] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.693 [2024-07-15 13:04:34.961462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.693 [2024-07-15 13:04:34.961492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.966384] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.966675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.966711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.971645] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.971945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.971969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.976858] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.977159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.977187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.982152] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.982465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.982494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.987412] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.987704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.987732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.992747] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.993072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.993100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:34.997941] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:34.998230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:34.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.003301] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.003608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.003648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
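The digest-error notices above and below repeat one three-entry pattern: tcp.c:2164 (data_crc32_calc_done) flags a data digest failure on the qpair, nvme_qpair.c prints the WRITE command that was affected, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). When going through a saved copy of this console output, a quick tally of the run can be taken with a short sketch like the one below; build.log is a placeholder for wherever the output was captured, not a file produced by the test itself.
# Count injected digest errors and the matching transient-transport-error
# completions; in this run the two totals should track each other.
grep -o 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log | wc -l
grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l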
00:20:22.694 [2024-07-15 13:04:35.008559] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.008884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.008913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.013807] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.014099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.014127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.019168] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.019469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.019497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.024562] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.024884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.024913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.030006] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.030300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.030328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.035321] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.035611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.035640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.040633] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.040955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.040983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.045993] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.046285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.046312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.051265] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.051555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.051583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.056534] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.056840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.056868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.061787] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.062076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.062104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.067045] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.067357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.067386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.072299] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.072590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.072619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.077555] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.077857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.077885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.082850] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.083153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.083180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.088154] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.088458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.088486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.093394] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.093686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.093716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.098718] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.099044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.099072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.104005] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.104298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.104326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.109283] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.109572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.109601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.694 [2024-07-15 13:04:35.114578] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.694 [2024-07-15 13:04:35.114895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.694 [2024-07-15 13:04:35.114924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.119939] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.120234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.120263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.125175] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.125464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.125493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.130368] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.130659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.130688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.135685] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.136028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.140962] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.141252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.141280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.146264] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.146556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.146585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.151520] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.151840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 
[2024-07-15 13:04:35.151869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.695 [2024-07-15 13:04:35.156899] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.695 [2024-07-15 13:04:35.157210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.695 [2024-07-15 13:04:35.157237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.162201] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.954 [2024-07-15 13:04:35.162493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.954 [2024-07-15 13:04:35.162522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.167548] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.954 [2024-07-15 13:04:35.167878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.954 [2024-07-15 13:04:35.167907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.172809] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.954 [2024-07-15 13:04:35.173095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.954 [2024-07-15 13:04:35.173123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.178123] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.954 [2024-07-15 13:04:35.178414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.954 [2024-07-15 13:04:35.178443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.183513] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.954 [2024-07-15 13:04:35.183834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.954 [2024-07-15 13:04:35.183862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.954 [2024-07-15 13:04:35.188716] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.189021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.189049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.193913] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.194204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.194232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.199093] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.199397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.199425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.204306] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.204594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.204622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.209490] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.209793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.209816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.214688] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.214990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.215018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.219915] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.220204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.220233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.225082] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.225376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.225403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.230312] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.230602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.230632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.235641] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.235961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.235989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.240879] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.241171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.241199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.246091] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.246396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.246425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.251419] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.251715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.251743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.256627] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.256937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.256961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.261967] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.262268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.262297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.267192] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.267504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.267533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.272488] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.272793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.272822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.277701] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.278015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.278044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.282990] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.283292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.283320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.288330] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.288621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.288650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.293636] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.293942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.293970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.298877] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 
[2024-07-15 13:04:35.299170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.299199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.304140] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.304432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.304460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.309369] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.309658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.309687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.314558] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.314876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.314904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.319790] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.320078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.320106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.325043] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.325337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.325366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.330313] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.330604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.955 [2024-07-15 13:04:35.330632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:22.955 [2024-07-15 13:04:35.335675] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.955 [2024-07-15 13:04:35.335981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.956 [2024-07-15 13:04:35.336009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:22.956 [2024-07-15 13:04:35.340871] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156ce10) with pdu=0x2000190fef90 00:20:22.956 [2024-07-15 13:04:35.341164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.956 [2024-07-15 13:04:35.341193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.956 00:20:22.956 Latency(us) 00:20:22.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.956 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:22.956 nvme0n1 : 2.00 5710.77 713.85 0.00 0.00 2795.14 2353.34 8698.41 00:20:22.956 =================================================================================================================== 00:20:22.956 Total : 5710.77 713.85 0.00 0.00 2795.14 2353.34 8698.41 00:20:22.956 0 00:20:22.956 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:22.956 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:22.956 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:22.956 | .driver_specific 00:20:22.956 | .nvme_error 00:20:22.956 | .status_code 00:20:22.956 | .command_transient_transport_error' 00:20:22.956 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 368 > 0 )) 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93832 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93832 ']' 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93832 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93832 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93832' 00:20:23.214 killing process with pid 93832 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93832 00:20:23.214 Received shutdown signal, test time was about 2.000000 seconds 00:20:23.214 00:20:23.214 Latency(us) 00:20:23.214 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.214 =================================================================================================================== 00:20:23.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.214 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93832 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93520 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93520 ']' 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93520 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93520 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.472 killing process with pid 93520 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93520' 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93520 00:20:23.472 13:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93520 00:20:23.730 00:20:23.730 real 0m18.782s 00:20:23.730 user 0m36.767s 00:20:23.730 sys 0m4.398s 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:23.730 ************************************ 00:20:23.730 END TEST nvmf_digest_error 00:20:23.730 ************************************ 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # nvmfcleanup 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.730 rmmod nvme_tcp 00:20:23.730 rmmod nvme_fabrics 00:20:23.730 rmmod nvme_keyring 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@493 -- # '[' -n 93520 ']' 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@494 -- # killprocess 93520 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93520 ']' 
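Buried in the wrap-up traced above, host/digest.sh@71 pulls the command_transient_transport_error counter out of the bperf bdev statistics and requires it to be non-zero (368 in this run) before the bperf and target processes are killed. A minimal sketch of that query, reusing the rpc.py invocation and jq filter visible in the trace; the socket path and bdev name are the ones from this run.
# Read the transient transport error count from the bperf instance over its
# RPC socket, then assert that at least one such error was observed.
count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
# digest.sh only requires that the counter advanced past zero.
(( count > 0 ))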
00:20:23.730 Process with pid 93520 is not found 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93520 00:20:23.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93520) - No such process 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93520 is not found' 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:20:23.730 00:20:23.730 real 0m38.969s 00:20:23.730 user 1m16.663s 00:20:23.730 sys 0m9.331s 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.730 13:04:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:23.730 ************************************ 00:20:23.730 END TEST nvmf_digest 00:20:23.730 ************************************ 00:20:23.988 13:04:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.988 13:04:36 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:20:23.988 13:04:36 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ tcp == \t\c\p ]] 00:20:23.988 13:04:36 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:23.988 13:04:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.988 13:04:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.988 13:04:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.988 ************************************ 00:20:23.988 START TEST nvmf_mdns_discovery 00:20:23.988 ************************************ 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:23.988 * Looking for test storage... 
00:20:23.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.988 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.988 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:20:23.989 Cannot find device "nvmf_tgt_br" 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.989 Cannot find device 
"nvmf_tgt_br2" 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # true 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:20:23.989 Cannot find device "nvmf_tgt_br" 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:20:23.989 Cannot find device "nvmf_tgt_br2" 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:20:23.989 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.247 13:04:36 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:20:24.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:24.247 00:20:24.247 --- 10.0.0.2 ping statistics --- 00:20:24.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.247 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:20:24.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:24.247 00:20:24.247 --- 10.0.0.3 ping statistics --- 00:20:24.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.247 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:24.247 00:20:24.247 --- 10.0.0.1 ping statistics --- 00:20:24.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.247 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@437 -- # return 0 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:24.247 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@485 -- # nvmfpid=94127 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@486 -- # waitforlisten 94127 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94127 ']' 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.505 13:04:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.505 [2024-07-15 13:04:36.780751] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:20:24.505 [2024-07-15 13:04:36.780877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.505 [2024-07-15 13:04:36.920557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.763 [2024-07-15 13:04:36.977395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
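The nvmf_veth_init sequence traced above is what gives this run its 10.0.0.x test network: the target side lives in the nvmf_tgt_ns_spdk namespace with two addresses, the initiator stays in the root namespace, and a bridge ties the veth peers together. A condensed recap of those commands (the helper in nvmf/common.sh also performs the cleanup and existence checks visible above, omitted here):

    # Target interfaces live in their own network namespace; the initiator stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target-side interfaces.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the root-namespace veth ends together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Admit NVMe/TCP traffic and bridge-forwarded traffic, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1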
00:20:24.763 [2024-07-15 13:04:36.977451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.763 [2024-07-15 13:04:36.977463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.763 [2024-07-15 13:04:36.977472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.763 [2024-07-15 13:04:36.977479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.763 [2024-07-15 13:04:36.977503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.329 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.329 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:25.329 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:25.329 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.329 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 [2024-07-15 13:04:37.900878] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 [2024-07-15 13:04:37.909009] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 null0 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 null1 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 null2 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 null3 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94177 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94177 /tmp/host.sock 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94177 ']' 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.588 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.588 13:04:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 [2024-07-15 13:04:38.018005] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:20:25.588 [2024-07-15 13:04:38.018120] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94177 ] 00:20:25.847 [2024-07-15 13:04:38.154592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.847 [2024-07-15 13:04:38.224057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.847 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.847 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:25.847 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:25.847 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:25.847 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:26.111 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94187 00:20:26.111 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:26.111 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:26.111 13:04:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:26.111 Process 980 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:26.111 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:26.111 Successfully dropped root privileges. 00:20:26.111 avahi-daemon 0.8 starting up. 00:20:26.111 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:26.111 Successfully called chroot(). 00:20:26.111 Successfully dropped remaining capabilities. 00:20:26.111 No service file found in /etc/avahi/services. 00:20:27.093 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:27.093 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:27.093 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:27.093 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:27.093 Network interface enumeration completed. 00:20:27.093 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:27.093 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:27.093 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:27.093 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:27.093 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2820856284. 
00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
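Two pieces come together in the trace above: an avahi-daemon confined to the target namespace and its two interfaces, and the mDNS browser started inside the host application listening on /tmp/host.sock. A sketch of those two steps, assuming the rpc_cmd helper resolves to scripts/rpc.py against that socket:

    # IPv4-only mDNS responder, restricted to the target-side interfaces, fed its
    # configuration through process substitution (the /dev/fd/63 seen in the trace).
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon \
        -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &

    # Host app: browse for _nvme-disc._tcp services and attach using the test host NQN.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test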
00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:27.093 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # xargs 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:27.352 [2024-07-15 13:04:39.736050] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 [2024-07-15 13:04:39.789467] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.352 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.611 [2024-07-15 13:04:39.829397] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.611 13:04:39 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.611 [2024-07-15 13:04:39.837389] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.611 13:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:28.179 [2024-07-15 13:04:40.636057] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:29.139 [2024-07-15 13:04:41.236085] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.139 [2024-07-15 13:04:41.236135] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:29.139 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.139 cookie is 0 00:20:29.139 is_local: 1 00:20:29.139 our_own: 0 00:20:29.139 wide_area: 0 00:20:29.139 multicast: 1 00:20:29.139 cached: 1 00:20:29.139 [2024-07-15 13:04:41.336072] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.139 [2024-07-15 13:04:41.336121] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:29.139 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.139 cookie is 0 00:20:29.139 is_local: 1 00:20:29.139 our_own: 0 00:20:29.139 wide_area: 0 00:20:29.139 multicast: 1 00:20:29.139 cached: 1 00:20:29.139 [2024-07-15 13:04:41.336137] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:29.139 [2024-07-15 13:04:41.436072] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.139 [2024-07-15 13:04:41.436114] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.139 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.139 cookie is 0 00:20:29.139 is_local: 1 00:20:29.139 our_own: 0 00:20:29.139 wide_area: 0 00:20:29.139 multicast: 1 00:20:29.139 cached: 1 00:20:29.139 [2024-07-15 13:04:41.536075] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:29.139 [2024-07-15 13:04:41.536122] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:29.139 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:29.139 cookie is 0 00:20:29.139 is_local: 1 00:20:29.139 our_own: 0 00:20:29.139 wide_area: 0 00:20:29.139 multicast: 1 00:20:29.139 cached: 1 00:20:29.139 [2024-07-15 13:04:41.536138] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:30.073 [2024-07-15 13:04:42.247509] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:30.073 [2024-07-15 13:04:42.247555] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:30.073 [2024-07-15 13:04:42.247577] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:30.073 [2024-07-15 13:04:42.333620] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:30.073 [2024-07-15 13:04:42.390528] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:30.073 [2024-07-15 13:04:42.390564] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:30.073 [2024-07-15 13:04:42.447351] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:30.073 [2024-07-15 13:04:42.447406] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:30.073 [2024-07-15 13:04:42.447428] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:30.073 [2024-07-15 13:04:42.533486] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:30.338 [2024-07-15 13:04:42.589570] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:30.338 [2024-07-15 13:04:42.589613] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 
13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:32.872 13:04:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
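The [[ ... == ... ]] checks above and below compare expected names against what the host application actually reports once discovery has attached both paths; the helpers behind them are thin jq wrappers over the host RPC socket, roughly as follows (again assuming rpc_cmd is scripts/rpc.py; sort and xargs normalize the lists so the comparison is order-independent):

    # mDNS discovery services registered in the host app (expected: mdns)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
    # Controllers attached by discovery (expected: mdns0_nvme0 mdns1_nvme0)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    # Namespaces exposed as bdevs (expected: mdns0_nvme0n1 mdns1_nvme0n1, with the n2 bdevs added later)
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs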
00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.872 13:04:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:33.806 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 [2024-07-15 13:04:46.384401] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:34.066 [2024-07-15 13:04:46.384814] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:34.066 [2024-07-15 13:04:46.384858] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:34.066 [2024-07-15 13:04:46.384898] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:34.066 [2024-07-15 13:04:46.384913] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 [2024-07-15 13:04:46.392323] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:34.066 [2024-07-15 13:04:46.392810] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:34.066 [2024-07-15 13:04:46.392879] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.066 13:04:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:34.066 [2024-07-15 13:04:46.524943] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:34.066 [2024-07-15 13:04:46.525182] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:34.324 [2024-07-15 13:04:46.583270] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:34.324 [2024-07-15 13:04:46.583303] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:34.324 [2024-07-15 13:04:46.583312] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:34.324 [2024-07-15 13:04:46.583332] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:34.324 
[2024-07-15 13:04:46.583390] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:34.324 [2024-07-15 13:04:46.583401] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:34.324 [2024-07-15 13:04:46.583407] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:34.324 [2024-07-15 13:04:46.583422] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:34.324 [2024-07-15 13:04:46.629062] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:34.324 [2024-07-15 13:04:46.629095] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:34.324 [2024-07-15 13:04:46.629146] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:34.324 [2024-07-15 13:04:46.629157] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 
00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:35.253 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.513 [2024-07-15 13:04:47.729889] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:35.513 [2024-07-15 13:04:47.729932] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:35.513 [2024-07-15 13:04:47.729972] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:35.513 [2024-07-15 13:04:47.729988] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:35.513 [2024-07-15 13:04:47.730059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.730094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.730109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.730128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.730138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.730147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.730156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.730166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.513 [2024-07-15 13:04:47.737887] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:35.513 [2024-07-15 13:04:47.737952] 
bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:35.513 [2024-07-15 13:04:47.738010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.738038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.738051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.738061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.738071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.738079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.738097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.513 [2024-07-15 13:04:47.738106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.513 [2024-07-15 13:04:47.738115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.513 [2024-07-15 13:04:47.740007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.513 13:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:35.513 [2024-07-15 13:04:47.747975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.513 [2024-07-15 13:04:47.750028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.513 [2024-07-15 13:04:47.750176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.513 [2024-07-15 13:04:47.750207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.513 [2024-07-15 13:04:47.750225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.513 [2024-07-15 13:04:47.750245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.513 [2024-07-15 13:04:47.750261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.513 [2024-07-15 13:04:47.750273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.513 [2024-07-15 13:04:47.750284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.513 [2024-07-15 13:04:47.750300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.513 [2024-07-15 13:04:47.757988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.513 [2024-07-15 13:04:47.758090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.513 [2024-07-15 13:04:47.758113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.513 [2024-07-15 13:04:47.758125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.513 [2024-07-15 13:04:47.758147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.513 [2024-07-15 13:04:47.758165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.513 [2024-07-15 13:04:47.758174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.513 [2024-07-15 13:04:47.758184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.513 [2024-07-15 13:04:47.758200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.513 [2024-07-15 13:04:47.760114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.513 [2024-07-15 13:04:47.760212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.513 [2024-07-15 13:04:47.760236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.513 [2024-07-15 13:04:47.760247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.513 [2024-07-15 13:04:47.760263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.513 [2024-07-15 13:04:47.760279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.760288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.760298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.760313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.768057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.768159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.768183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.768194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.768211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.768227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.768237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.768246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.514 [2024-07-15 13:04:47.768261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.514 [2024-07-15 13:04:47.770178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.514 [2024-07-15 13:04:47.770271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.770293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.514 [2024-07-15 13:04:47.770303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.770320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.770335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.770345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.770354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.770369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.778125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.778219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.778242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.778253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.778270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.778286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.778295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.778305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.514 [2024-07-15 13:04:47.778320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.514 [2024-07-15 13:04:47.780239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.514 [2024-07-15 13:04:47.780328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.780357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.514 [2024-07-15 13:04:47.780368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.780385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.780400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.780409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.780418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.780433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.788190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.788291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.788315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.788326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.788344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.788360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.788373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.788387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.514 [2024-07-15 13:04:47.788408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.514 [2024-07-15 13:04:47.790314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.514 [2024-07-15 13:04:47.790416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.790439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.514 [2024-07-15 13:04:47.790450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.790468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.790483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.790493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.790503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.790517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.798256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.798368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.798392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.798404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.798421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.798436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.798446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.798456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.514 [2024-07-15 13:04:47.798471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.514 [2024-07-15 13:04:47.800376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.514 [2024-07-15 13:04:47.800475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.800497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.514 [2024-07-15 13:04:47.800508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.800526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.800541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.800554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.800567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.800583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.808326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.808433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.808457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.808475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.808501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.808522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.808533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.808542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.514 [2024-07-15 13:04:47.808558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.514 [2024-07-15 13:04:47.810436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.514 [2024-07-15 13:04:47.810524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.810546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.514 [2024-07-15 13:04:47.810557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.810574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.810589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.514 [2024-07-15 13:04:47.810599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.514 [2024-07-15 13:04:47.810608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.514 [2024-07-15 13:04:47.810623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.514 [2024-07-15 13:04:47.818399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.514 [2024-07-15 13:04:47.818494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.514 [2024-07-15 13:04:47.818517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.514 [2024-07-15 13:04:47.818529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.514 [2024-07-15 13:04:47.818546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.514 [2024-07-15 13:04:47.818562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.818572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.818581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.818596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.820492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.515 [2024-07-15 13:04:47.820580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.820601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.515 [2024-07-15 13:04:47.820612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.820629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.820644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.820653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.820663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.515 [2024-07-15 13:04:47.820677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.515 [2024-07-15 13:04:47.828462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.515 [2024-07-15 13:04:47.828562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.828585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.515 [2024-07-15 13:04:47.828596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.828614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.828630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.828640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.828649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.828664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.830551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.515 [2024-07-15 13:04:47.830648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.830670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.515 [2024-07-15 13:04:47.830681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.830698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.830713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.830723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.830732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.515 [2024-07-15 13:04:47.830747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.515 [2024-07-15 13:04:47.838527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.515 [2024-07-15 13:04:47.838633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.838657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.515 [2024-07-15 13:04:47.838668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.838686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.838701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.838711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.838720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.838735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.840613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.515 [2024-07-15 13:04:47.840712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.840734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.515 [2024-07-15 13:04:47.840745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.840775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.840794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.840803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.840813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.515 [2024-07-15 13:04:47.840827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.515 [2024-07-15 13:04:47.848600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.515 [2024-07-15 13:04:47.848705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.848728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.515 [2024-07-15 13:04:47.848739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.848756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.848787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.848798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.848807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.848822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.850675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.515 [2024-07-15 13:04:47.850761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.850796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.515 [2024-07-15 13:04:47.850808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.850825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.850840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.850850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.850859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.515 [2024-07-15 13:04:47.850874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.515 [2024-07-15 13:04:47.858672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.515 [2024-07-15 13:04:47.858782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.858806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.515 [2024-07-15 13:04:47.858818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.858836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.858851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.858861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.858871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.858886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.860731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:35.515 [2024-07-15 13:04:47.860831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.860854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238de70 with addr=10.0.0.2, port=4420 00:20:35.515 [2024-07-15 13:04:47.860865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238de70 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.860889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238de70 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.860913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.860926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.860935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:35.515 [2024-07-15 13:04:47.860950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:35.515 [2024-07-15 13:04:47.868735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:35.515 [2024-07-15 13:04:47.868840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.515 [2024-07-15 13:04:47.868863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2347140 with addr=10.0.0.3, port=4420 00:20:35.515 [2024-07-15 13:04:47.868874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347140 is same with the state(5) to be set 00:20:35.515 [2024-07-15 13:04:47.868891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2347140 (9): Bad file descriptor 00:20:35.515 [2024-07-15 13:04:47.868907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:35.515 [2024-07-15 13:04:47.868917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:35.515 [2024-07-15 13:04:47.868926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:35.515 [2024-07-15 13:04:47.868941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:35.515 [2024-07-15 13:04:47.869002] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:35.515 [2024-07-15 13:04:47.869025] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:35.515 [2024-07-15 13:04:47.869056] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:35.515 [2024-07-15 13:04:47.869101] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:35.516 [2024-07-15 13:04:47.869119] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:35.516 [2024-07-15 13:04:47.869135] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:35.516 [2024-07-15 13:04:47.955097] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:35.516 [2024-07-15 13:04:47.955165] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.449 
13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.449 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:36.706 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:36.706 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:36.706 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.706 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.706 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.707 13:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.707 13:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:36.707 [2024-07-15 13:04:49.036081] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:37.636 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.893 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:37.894 [2024-07-15 13:04:50.270330] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:37.894 2024/07/15 13:04:50 error 
on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:37.894 request: 00:20:37.894 { 00:20:37.894 "method": "bdev_nvme_start_mdns_discovery", 00:20:37.894 "params": { 00:20:37.894 "name": "mdns", 00:20:37.894 "svcname": "_nvme-disc._http", 00:20:37.894 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:37.894 } 00:20:37.894 } 00:20:37.894 Got JSON-RPC error response 00:20:37.894 GoRPCClient: error on JSON-RPC call 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.894 13:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:38.484 [2024-07-15 13:04:50.859056] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:38.743 [2024-07-15 13:04:50.959048] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:38.743 [2024-07-15 13:04:51.059057] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:38.743 [2024-07-15 13:04:51.059099] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:38.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:38.743 cookie is 0 00:20:38.743 is_local: 1 00:20:38.743 our_own: 0 00:20:38.743 wide_area: 0 00:20:38.743 multicast: 1 00:20:38.743 cached: 1 00:20:38.743 [2024-07-15 13:04:51.159055] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:38.743 [2024-07-15 13:04:51.159101] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:38.743 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:38.743 cookie is 0 00:20:38.743 is_local: 1 00:20:38.743 our_own: 0 00:20:38.743 wide_area: 0 00:20:38.743 multicast: 1 00:20:38.743 cached: 1 00:20:38.743 [2024-07-15 13:04:51.159117] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:39.000 [2024-07-15 13:04:51.259072] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.000 [2024-07-15 13:04:51.259118] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:39.000 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.000 cookie is 0 00:20:39.000 is_local: 1 00:20:39.000 our_own: 0 00:20:39.000 wide_area: 0 00:20:39.000 multicast: 1 00:20:39.000 cached: 1 00:20:39.000 [2024-07-15 13:04:51.359059] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:39.000 [2024-07-15 13:04:51.359106] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:39.000 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:39.000 cookie is 0 00:20:39.000 is_local: 1 00:20:39.000 our_own: 0 00:20:39.000 wide_area: 0 00:20:39.000 multicast: 1 00:20:39.000 cached: 1 00:20:39.000 [2024-07-15 13:04:51.359122] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:39.935 [2024-07-15 13:04:52.069274] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:39.935 [2024-07-15 13:04:52.069320] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:39.935 [2024-07-15 13:04:52.069342] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:39.935 [2024-07-15 13:04:52.155436] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:39.935 [2024-07-15 13:04:52.215846] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:39.935 [2024-07-15 13:04:52.215897] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:39.935 [2024-07-15 13:04:52.269253] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:39.935 [2024-07-15 13:04:52.269300] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:39.935 [2024-07-15 13:04:52.269322] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:39.935 [2024-07-15 13:04:52.355414] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:40.193 [2024-07-15 13:04:52.415783] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:40.193 [2024-07-15 13:04:52.415832] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:43.478 13:04:55 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.478 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.479 [2024-07-15 13:04:55.466109] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:43.479 2024/07/15 13:04:55 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:43.479 request: 00:20:43.479 { 00:20:43.479 "method": "bdev_nvme_start_mdns_discovery", 00:20:43.479 "params": { 00:20:43.479 "name": "cdc", 00:20:43.479 "svcname": "_nvme-disc._tcp", 00:20:43.479 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:43.479 } 00:20:43.479 } 00:20:43.479 Got JSON-RPC error response 00:20:43.479 GoRPCClient: error on JSON-RPC call 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94177 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94177 00:20:43.479 [2024-07-15 13:04:55.673993] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94187 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:20:43.479 Got SIGTERM, quitting. 00:20:43.479 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:43.479 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:43.479 avahi-daemon 0.8 exiting. 
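The Code=-17 "File exists" failures above are the expected outcome of this test: mDNS discovery is started once under the bdev name "mdns", and every further bdev_nvme_start_mdns_discovery against the same host socket is rejected while that avahi poller is running. rpc_cmd in these scripts is the autotest shorthand for issuing the same call through rpc.py, so a rough by-hand equivalent (a sketch only; the rpc.py path and the /tmp/host.sock socket are the ones used in this run, not general defaults) would be:

    # First start succeeds and spawns the avahi poller for _nvme-disc._tcp.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # A second start reusing the name "mdns" is refused with Code=-17 (File exists),
    # which is exactly the JSON-RPC error captured in the log above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
        || echo "duplicate start rejected as expected"

    # Inspect and tear down the running discovery service.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns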
00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.479 rmmod nvme_tcp 00:20:43.479 rmmod nvme_fabrics 00:20:43.479 rmmod nvme_keyring 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # '[' -n 94127 ']' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@494 -- # killprocess 94127 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94127 ']' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94127 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94127 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:43.479 killing process with pid 94127 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94127' 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94127 00:20:43.479 13:04:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94127 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:20:43.738 00:20:43.738 real 0m19.886s 00:20:43.738 user 0m38.848s 00:20:43.738 sys 0m1.890s 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.738 ************************************ 00:20:43.738 13:04:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.738 END TEST nvmf_mdns_discovery 00:20:43.738 ************************************ 00:20:43.738 13:04:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
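The name-list and count comparisons in the mdns checks above come from a few small jq pipelines in mdns_discovery.sh. Against the same /tmp/host.sock socket they amount to roughly the following (a sketch; the rpc.py path is the one from this run):

    # get_bdev_list: bdev names, sorted and flattened onto one line for comparison.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

    # get_subsystem_names: attached NVMe controller names, normalized the same way.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs

    # get_notification_count: notifications newer than the last seen notify_id (4 at that point in the run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'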
00:20:43.738 13:04:56 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ 1 -eq 1 ]] 00:20:43.738 13:04:56 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:43.738 13:04:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.738 13:04:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.738 13:04:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.738 ************************************ 00:20:43.738 START TEST nvmf_host_multipath 00:20:43.738 ************************************ 00:20:43.738 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:43.997 * Looking for test storage... 00:20:43.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.997 13:04:56 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.998 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@436 -- # nvmf_veth_init 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.998 13:04:56 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:20:43.998 Cannot find device "nvmf_tgt_br" 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.998 Cannot find device "nvmf_tgt_br2" 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:20:43.998 Cannot find device "nvmf_tgt_br" 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:20:43.998 Cannot find device "nvmf_tgt_br2" 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.998 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:20:44.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:20:44.257 00:20:44.257 --- 10.0.0.2 ping statistics --- 00:20:44.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.257 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:20:44.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:44.257 00:20:44.257 --- 10.0.0.3 ping statistics --- 00:20:44.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.257 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:20:44.257 00:20:44.257 --- 10.0.0.1 ping statistics --- 00:20:44.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.257 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@437 -- # return 0 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@485 -- # nvmfpid=94745 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@486 -- # waitforlisten 94745 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94745 ']' 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.257 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.258 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.258 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:44.258 [2024-07-15 13:04:56.709381] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:20:44.258 [2024-07-15 13:04:56.709466] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.516 [2024-07-15 13:04:56.845345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:44.516 [2024-07-15 13:04:56.913528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:44.516 [2024-07-15 13:04:56.913796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.516 [2024-07-15 13:04:56.913956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.516 [2024-07-15 13:04:56.914101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.516 [2024-07-15 13:04:56.914144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.516 [2024-07-15 13:04:56.914341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.516 [2024-07-15 13:04:56.914344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.774 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.774 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:44.774 13:04:56 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:44.774 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.774 13:04:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:44.774 13:04:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.774 13:04:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94745 00:20:44.774 13:04:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:45.031 [2024-07-15 13:04:57.315554] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.031 13:04:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:45.289 Malloc0 00:20:45.289 13:04:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:45.547 13:04:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.113 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.371 [2024-07-15 13:04:58.600499] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.371 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:46.629 [2024-07-15 13:04:58.876633] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:46.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
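Stripped of the xtrace noise, the target-side bring-up that multipath.sh has just finished is a short rpc.py sequence: create the TCP transport, back subsystem nqn.2016-06.io.spdk:cnode1 with the 64 MiB / 512-byte-block Malloc0 bdev, and publish it on both 10.0.0.2:4420 and 10.0.0.2:4421 so the ANA state of each listener can be flipped independently later on. A condensed sketch, assuming the nvmf_tgt started above is answering on its default /var/tmp/spdk.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for readability only

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # set_ANA_state later steers I/O by retagging the two listeners, e.g.:
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

bdevperf then attaches both paths with bdev_nvme_attach_controller ... -x multipath, and confirm_io_on_port checks via nvmf_subsystem_get_listeners that the bpftrace probe counts land on whichever listener is currently marked optimized.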
00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94838 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94838 /var/tmp/bdevperf.sock 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94838 ']' 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.629 13:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:47.563 13:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.563 13:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:47.563 13:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:47.820 13:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:48.385 Nvme0n1 00:20:48.385 13:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:48.642 Nvme0n1 00:20:48.642 13:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:48.642 13:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.574 13:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:49.574 13:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:49.832 13:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:50.090 13:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:50.090 13:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94925 00:20:50.090 13:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:50.090 13:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:56.639 Attaching 4 probes... 00:20:56.639 @path[10.0.0.2, 4421]: 17182 00:20:56.639 @path[10.0.0.2, 4421]: 17417 00:20:56.639 @path[10.0.0.2, 4421]: 17497 00:20:56.639 @path[10.0.0.2, 4421]: 17334 00:20:56.639 @path[10.0.0.2, 4421]: 17139 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94925 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:56.639 13:05:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:56.898 13:05:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:57.154 13:05:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:57.154 13:05:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95060 00:20:57.154 13:05:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:57.154 13:05:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:03.704 Attaching 4 probes... 
00:21:03.704 @path[10.0.0.2, 4420]: 17251 00:21:03.704 @path[10.0.0.2, 4420]: 17346 00:21:03.704 @path[10.0.0.2, 4420]: 17338 00:21:03.704 @path[10.0.0.2, 4420]: 17552 00:21:03.704 @path[10.0.0.2, 4420]: 17427 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95060 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:03.704 13:05:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:03.960 13:05:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:03.960 13:05:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95192 00:21:03.960 13:05:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:03.960 13:05:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:10.515 Attaching 4 probes... 
00:21:10.515 @path[10.0.0.2, 4421]: 13911 00:21:10.515 @path[10.0.0.2, 4421]: 17187 00:21:10.515 @path[10.0.0.2, 4421]: 17117 00:21:10.515 @path[10.0.0.2, 4421]: 16936 00:21:10.515 @path[10.0.0.2, 4421]: 17275 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95192 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:10.515 13:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:10.793 13:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:10.793 13:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:10.793 13:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95322 00:21:10.793 13:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:17.366 Attaching 4 probes... 
00:21:17.366 00:21:17.366 00:21:17.366 00:21:17.366 00:21:17.366 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95322 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:17.366 13:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:17.624 13:05:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:17.624 13:05:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95455 00:21:17.624 13:05:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:17.624 13:05:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:24.186 Attaching 4 probes... 
00:21:24.186 @path[10.0.0.2, 4421]: 16502 00:21:24.186 @path[10.0.0.2, 4421]: 17079 00:21:24.186 @path[10.0.0.2, 4421]: 16877 00:21:24.186 @path[10.0.0.2, 4421]: 16957 00:21:24.186 @path[10.0.0.2, 4421]: 16861 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:24.186 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95455 00:21:24.187 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:24.187 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:24.187 [2024-07-15 13:05:36.566895] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566946] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566958] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566967] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566975] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566983] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.566992] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.567000] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 [2024-07-15 13:05:36.567008] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3330 is same with the state(5) to be set 00:21:24.187 13:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:25.149 13:05:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:25.149 13:05:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95592 00:21:25.149 13:05:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:25.149 13:05:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 
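The host/multipath.sh@67 step above asks the target which listener currently advertises the expected ANA state; an empty result, as in the inaccessible/inaccessible case earlier, means no port qualifies. A sketch with the state fixed to "optimized", using the same rpc.py and jq invocation seen in the xtrace (rpc.py path shortened):
# print the trsvcid (port) of the listener whose first reported ANA state matches
active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')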
-- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:31.706 Attaching 4 probes... 00:21:31.706 @path[10.0.0.2, 4420]: 15848 00:21:31.706 @path[10.0.0.2, 4420]: 16972 00:21:31.706 @path[10.0.0.2, 4420]: 16604 00:21:31.706 @path[10.0.0.2, 4420]: 16573 00:21:31.706 @path[10.0.0.2, 4420]: 16296 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95592 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:31.706 13:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.964 [2024-07-15 13:05:44.193513] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.964 13:05:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:32.222 13:05:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:38.780 13:05:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:38.780 13:05:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95779 00:21:38.780 13:05:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94745 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:38.780 13:05:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:44.070 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:44.070 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:44.663 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:44.663 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:44.663 Attaching 4 probes... 
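The @100 to @112 sequence around this point exercises failover and failback: drop the optimized 4421 listener, confirm I/O continues on the non_optimized 4420 path, then re-add 4421 as optimized and confirm I/O moves back. A condensed replay of those rpc.py calls, copied from the xtrace (paths shortened, the confirm_io_on_port checks omitted):
# fail over: remove the optimized listener; the host should fall back to the 4420 path
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 1
# fail back: re-add 4421 and mark it optimized again; I/O should return to it
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized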
00:21:44.663 @path[10.0.0.2, 4421]: 14996 00:21:44.664 @path[10.0.0.2, 4421]: 16255 00:21:44.664 @path[10.0.0.2, 4421]: 15133 00:21:44.664 @path[10.0.0.2, 4421]: 15190 00:21:44.664 @path[10.0.0.2, 4421]: 16346 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95779 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94838 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94838 ']' 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94838 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94838 00:21:44.664 killing process with pid 94838 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94838' 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94838 00:21:44.664 13:05:56 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94838 00:21:44.664 Connection closed with partial response: 00:21:44.664 00:21:44.664 00:21:44.664 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94838 00:21:44.664 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:44.664 [2024-07-15 13:04:58.962146] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:21:44.664 [2024-07-15 13:04:58.962316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94838 ] 00:21:44.664 [2024-07-15 13:04:59.101547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.664 [2024-07-15 13:04:59.189716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.664 Running I/O for 90 seconds... 
00:21:44.664 [2024-07-15 13:05:09.400270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.400975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.400989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.401274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.401310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.401346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.401368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.664 [2024-07-15 13:05:09.403486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:44.664 [2024-07-15 13:05:09.403666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.403967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.403982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.664 [2024-07-15 13:05:09.404331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.664 [2024-07-15 13:05:09.404353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.404737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:21:44.665 [2024-07-15 13:05:09.404785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.404810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.405554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.405966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.405981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.665 [2024-07-15 13:05:09.406621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.406999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.407018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.407055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.665 [2024-07-15 13:05:09.407091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.665 [2024-07-15 13:05:09.407324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.665 [2024-07-15 13:05:09.407345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.407361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.407382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.407405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.407430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.407447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:21:44.666 [2024-07-15 13:05:09.408367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.408973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.408994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 
[2024-07-15 13:05:09.409491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.666 [2024-07-15 13:05:09.409668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.409704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.409740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.409791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.409828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.666 [2024-07-15 13:05:09.409879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.666 [2024-07-15 13:05:09.409903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.409918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.409940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.409954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.409976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.409991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:21:44.667 [2024-07-15 13:05:09.410603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.410951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.410966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.411808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.411836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.411863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.667 [2024-07-15 13:05:09.411880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.411903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.411918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.411940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.411976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.411991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.667 [2024-07-15 13:05:09.412560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.667 [2024-07-15 13:05:09.412932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.667 [2024-07-15 13:05:09.412948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.412970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.412985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.413358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.413379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.421975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.422382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.668 [2024-07-15 13:05:09.422402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:21:44.668 [2024-07-15 13:05:09.422447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.422462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.423966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.423987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.668 [2024-07-15 13:05:09.424252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.668 [2024-07-15 13:05:09.424603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.668 [2024-07-15 13:05:09.424618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.424654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.424690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.424725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.424773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.424824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.424860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.424895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.424932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.424967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.424988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:21:44.669 [2024-07-15 13:05:09.425353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.425986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.426001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.426022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.426037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.426844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.426872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.426900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.426917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.426938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.669 [2024-07-15 13:05:09.427000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.669 [2024-07-15 13:05:09.427309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.669 [2024-07-15 13:05:09.427482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.669 [2024-07-15 13:05:09.427519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.427966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.427993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:21:44.670 [2024-07-15 13:05:09.428709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.428838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.428883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.428928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.428964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.428983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.429027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.429072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.429117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.670 [2024-07-15 13:05:09.429163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.429965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.429998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.670 [2024-07-15 13:05:09.430888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.430959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.430978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.431004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.431023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.431049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.431068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.431095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.670 [2024-07-15 13:05:09.431113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.670 [2024-07-15 13:05:09.431140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.431973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.431991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.432036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.432081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:21:44.671 [2024-07-15 13:05:09.432289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.432962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.432981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.433561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.433580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.434682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:44.671 [2024-07-15 13:05:09.434738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.434805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.671 [2024-07-15 13:05:09.434852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.434898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.434944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.434971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.434989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.435015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.435034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.671 [2024-07-15 13:05:09.435061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.671 [2024-07-15 13:05:09.435079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.672 [2024-07-15 13:05:09.435106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.672 [2024-07-15 13:05:09.435125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.672 [2024-07-15 13:05:09.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.672 [2024-07-15 13:05:09.435183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.672 [2024-07-15 13:05:09.435212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:44.672 [2024-07-15 13:05:09.435239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:21:44.672 [2024-07-15 13:05:09.435277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:44.672 [2024-07-15 13:05:09.435299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[2024-07-15 13:05:09.435326 to 13:05:09.445967: repeated nvme_qpair.c *NOTICE* pairs of the same form, nvme_io_qpair_print_command for queued WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands with sqid:1 nsid:1 len:8 and lba 61200-62216, each followed by spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:21:44.675 [2024-07-15 13:05:09.445988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:44.675 [2024-07-15 13:05:09.446003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.675 [2024-07-15 13:05:09.446228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446371] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.675 [2024-07-15 13:05:09.446524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.675 [2024-07-15 13:05:09.446545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 
13:05:09.446738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.446978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.446999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61440 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.447310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.447326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.455818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.455856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.456803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.456902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.456924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.456947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.456962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.456984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.456999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.457035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.676 [2024-07-15 13:05:09.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:21:44.676 [2024-07-15 13:05:09.457345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.457967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.457982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.676 [2024-07-15 13:05:09.458305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.676 [2024-07-15 13:05:09.458321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.677 [2024-07-15 13:05:09.458464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:09.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.458983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.458998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.459020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.459035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:09.459869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:09.459899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.947844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.947923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.947993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.677 [2024-07-15 13:05:15.948856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.948912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.948957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.948983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 
m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 13:05:15.949957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.677 [2024-07-15 13:05:15.949984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.677 [2024-07-15 
00:21:44.677 [2024-07-15 13:05:15.950003 - 13:05:15.955134] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: long run of repeated command/completion pairs on qid:1 nsid:1 -- WRITE lba:120000-120432 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:119608-119920 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) -- every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:21:44.679 [2024-07-15 13:05:23.139124 - 13:05:23.146175] nvme_qpair.c: the same *NOTICE* pattern repeats on qid:1 nsid:1 -- WRITE lba:20552-21312 len:8 and READ lba:20544 len:8, with lba:20544-20648 logged a second time -- each completion again reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:21:44.681 [2024-07-15 13:05:23.146196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.146886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.146901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.681 [2024-07-15 13:05:23.147593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.681 [2024-07-15 13:05:23.147614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.682 [2024-07-15 13:05:23.147740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.147982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.147998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:21:44.682 [2024-07-15 13:05:23.148898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.148971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.148986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.149720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.149735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.168734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.168806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.169695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.169728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.169758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.169795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.169819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.169835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.682 [2024-07-15 13:05:23.169857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.682 [2024-07-15 13:05:23.169873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.169894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.169909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.169930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.683 [2024-07-15 13:05:23.169946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.169967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.169982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.683 [2024-07-15 13:05:23.170949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.170970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.170985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:44.683 [2024-07-15 13:05:23.171079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.171976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.171996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.172392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.683 [2024-07-15 13:05:23.172414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.683 [2024-07-15 13:05:23.173440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.684 [2024-07-15 13:05:23.173539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.173953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.173973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.174968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.174989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:21:44.684 [2024-07-15 13:05:23.175126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.175962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.176047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.176097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.176148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.176198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.684 [2024-07-15 13:05:23.176249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.684 [2024-07-15 13:05:23.176278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.685 [2024-07-15 13:05:23.176713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.176865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.176886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.178950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.178971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:21:44.685 [2024-07-15 13:05:23.179443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.685 [2024-07-15 13:05:23.179809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.179943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.179964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.685 [2024-07-15 13:05:23.180679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.685 [2024-07-15 13:05:23.180700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.180730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.180751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.180801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.180824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.180855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.180875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.180905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.180926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.180955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.180976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.686 [2024-07-15 13:05:23.181077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.181435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.181456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.182953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.182974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:21:44.686 [2024-07-15 13:05:23.183836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.183984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.183999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.686 [2024-07-15 13:05:23.184717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.686 [2024-07-15 13:05:23.184739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.184955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.184976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.687 [2024-07-15 13:05:23.184992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.185432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.185448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:21:44.687 [2024-07-15 13:05:23.186944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.186981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.186996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.687 [2024-07-15 13:05:23.187540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.687 [2024-07-15 13:05:23.187555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.688 [2024-07-15 13:05:23.187592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.187981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.187998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.688 [2024-07-15 13:05:23.188108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20728 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.188687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.188703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189626] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.189979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.189995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 
13:05:23.190025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.688 [2024-07-15 13:05:23.190622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.688 [2024-07-15 13:05:23.190637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.190975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.190991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.689 [2024-07-15 13:05:23.191549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.689 [2024-07-15 13:05:23.191926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.689 [2024-07-15 13:05:23.191943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:21:44.689 - 00:21:44.693 [2024-07-15 13:05:23.191964 - 13:05:23.202617] nvme_qpair.c: repeated *NOTICE* output from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion for each outstanding cid (values between 0 and 126) on qid:1: WRITE commands (sqid:1 nsid:1, lba 20552-21560, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands at lba:20544 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing with every completion
00:21:44.693 [2024-07-15 13:05:23.202638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.693 [2024-07-15 13:05:23.202655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.693 [2024-07-15 13:05:23.202676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.693 [2024-07-15 13:05:23.202692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.202968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.202989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.694 [2024-07-15 13:05:23.203794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.203964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.694 [2024-07-15 13:05:23.203980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.694 [2024-07-15 13:05:23.204001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.204884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.204900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:21:44.695 [2024-07-15 13:05:23.205653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.205968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.205983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.695 [2024-07-15 13:05:23.206310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.695 [2024-07-15 13:05:23.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.696 [2024-07-15 13:05:23.206825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.206957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.206973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.696 [2024-07-15 13:05:23.207091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.207959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:21:44.696 [2024-07-15 13:05:23.207984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.208000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.208021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.208037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.696 [2024-07-15 13:05:23.208058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.696 [2024-07-15 13:05:23.208073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.208095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.208110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.208878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.208907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.208934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.208953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.208976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.208992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.697 [2024-07-15 13:05:23.209881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.209976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.209992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.697 [2024-07-15 13:05:23.210407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.697 [2024-07-15 13:05:23.210428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.210968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.210983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:21:44.698 [2024-07-15 13:05:23.211004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.211322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.211338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.212093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.212122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.212149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.212167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.212189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.212205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.212226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.212241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.698 [2024-07-15 13:05:23.212263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.698 [2024-07-15 13:05:23.212278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.699 [2024-07-15 13:05:23.212899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.212975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.699 [2024-07-15 13:05:23.212991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:44.699 [2024-07-15 13:05:23.213012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.700 [2024-07-15 13:05:23.213555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.213969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.213984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.214005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.214020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:21:44.700 [2024-07-15 13:05:23.214041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.214056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.214078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.214093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.214115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.214130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.214151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.214166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:44.700 [2024-07-15 13:05:23.220868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.700 [2024-07-15 13:05:23.220891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.220919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.220935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.220962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.220978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:44.701 [2024-07-15 13:05:23.221539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.221967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.221993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.701 [2024-07-15 13:05:23.222777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.701 [2024-07-15 13:05:23.222797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.222824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.222848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.222876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.222893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:21:44.702 [2024-07-15 13:05:23.222919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.222935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.222962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.222978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.702 [2024-07-15 13:05:23.223340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:44.702 [2024-07-15 13:05:23.223367 - 13:05:23.223908] nvme_qpair.c: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: WRITE commands on sqid:1 (lba 20952-21016, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02). [2024-07-15 13:05:36.568512 - 13:05:36.571550] the same *NOTICE* pairs for READ commands (lba 53360-53424) and WRITE commands (lba 53504-54136) on sqid:1, each completed with ABORTED - SQ DELETION (00/08). [2024-07-15 13:05:36.571586 - 13:05:36.573473] the remaining queued WRITE commands (lba 54144-54376) and READ commands (lba 53432-53496) are drained one at a time, each logged by nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o), completed as ABORTED - SQ DELETION (00/08), and announced by nvme_qpair_manual_complete_request: *NOTICE*: Command
completed manually: 00:21:44.706 [2024-07-15 13:05:36.573484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53496 len:8 PRP1 0x0 PRP2 0x0 00:21:44.706 [2024-07-15 13:05:36.573496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.706 [2024-07-15 13:05:36.573543] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1950500 was disconnected and freed. reset controller. 00:21:44.706 [2024-07-15 13:05:36.573654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.706 [2024-07-15 13:05:36.573681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.706 [2024-07-15 13:05:36.573697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.706 [2024-07-15 13:05:36.573711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.706 [2024-07-15 13:05:36.573725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.706 [2024-07-15 13:05:36.573749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.706 [2024-07-15 13:05:36.573782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.706 [2024-07-15 13:05:36.573800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.706 [2024-07-15 13:05:36.573814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1c4d0 is same with the state(5) to be set 00:21:44.706 [2024-07-15 13:05:36.575520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.706 [2024-07-15 13:05:36.575567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1c4d0 (9): Bad file descriptor 00:21:44.706 [2024-07-15 13:05:36.575697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.706 [2024-07-15 13:05:36.575730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1c4d0 with addr=10.0.0.2, port=4421 00:21:44.706 [2024-07-15 13:05:36.575748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1c4d0 is same with the state(5) to be set 00:21:44.706 [2024-07-15 13:05:36.575791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1c4d0 (9): Bad file descriptor 00:21:44.706 [2024-07-15 13:05:36.575819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.706 [2024-07-15 13:05:36.575834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:44.706 [2024-07-15 13:05:36.575848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
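The records here show the host-side path flip for nqn.2016-06.io.spdk:cnode1: the I/O qpair is disconnected and freed, the controller is reset, the first reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused) while no listener is reachable, and a later retry completes the reset. A minimal sketch of driving the same flip by hand against an already-running target, using existing rpc.py commands (the exact toggling done by multipath.sh may differ):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Remove the active listener: host I/O starts failing and bdev_nvme resets the controller.
  $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

  # While the listener is down, reconnect attempts fail with errno 111, as logged above.
  sleep 5

  # Restore the listener: the next reconnect poll succeeds and the reset completes.
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

  # Host side: query the bdevperf app over its own RPC socket (socket name as used by these host tests).
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers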
00:21:44.706 [2024-07-15 13:05:36.575897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:44.706 [2024-07-15 13:05:36.575922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:44.706 [2024-07-15 13:05:46.649902] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:44.706 Received shutdown signal, test time was about 55.765814 seconds 00:21:44.706 00:21:44.706 Latency(us) 00:21:44.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.706 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.706 Verification LBA range: start 0x0 length 0x4000 00:21:44.706 Nvme0n1 : 55.76 7187.97 28.08 0.00 0.00 17776.71 558.55 7107438.78 00:21:44.706 =================================================================================================================== 00:21:44.706 Total : 7187.97 28.08 0.00 0.00 17776.71 558.55 7107438.78 00:21:44.706 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.964 rmmod nvme_tcp 00:21:44.964 rmmod nvme_fabrics 00:21:44.964 rmmod nvme_keyring 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@493 -- # '[' -n 94745 ']' 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@494 -- # killprocess 94745 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94745 ']' 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94745 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.964 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94745 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.221 killing process with pid 94745 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 94745' 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94745 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94745 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:21:45.221 00:21:45.221 real 1m1.464s 00:21:45.221 user 2m55.617s 00:21:45.221 sys 0m13.186s 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.221 13:05:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:45.221 ************************************ 00:21:45.221 END TEST nvmf_host_multipath 00:21:45.221 ************************************ 00:21:45.221 13:05:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:45.221 13:05:57 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:45.221 13:05:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.221 13:05:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.221 13:05:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.479 ************************************ 00:21:45.479 START TEST nvmf_timeout 00:21:45.479 ************************************ 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:45.479 * Looking for test storage... 
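Here run_test hands control to the next suite; as the END/START banners and the timing block above suggest, run_test essentially brackets the command it is given, so the same suite can also be exercised directly on a prepared machine, roughly:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/timeout.sh --transport=tcp    # same invocation run_test logs above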
00:21:45.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.479 
13:05:57 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.479 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.479 13:05:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@436 -- # nvmf_veth_init 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:21:45.480 Cannot find device "nvmf_tgt_br" 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:21:45.480 Cannot find device "nvmf_tgt_br2" 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:21:45.480 Cannot find device "nvmf_tgt_br" 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:21:45.480 Cannot find device "nvmf_tgt_br2" 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:45.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- 
nvmf/common.sh@166 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:45.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.480 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:21:45.738 13:05:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:21:45.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:45.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:21:45.738 00:21:45.738 --- 10.0.0.2 ping statistics --- 00:21:45.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.738 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:21:45.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:45.738 00:21:45.738 --- 10.0.0.3 ping statistics --- 00:21:45.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.738 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:21:45.738 00:21:45.738 --- 10.0.0.1 ping statistics --- 00:21:45.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.738 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@437 -- # return 0 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@485 -- # nvmfpid=96103 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@486 -- # waitforlisten 96103 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96103 ']' 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
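For readers reconstructing the environment from the trace: the nvmf_veth_init steps above reduce to the sketch below (interface names, addresses and the iptables rules are copied from the trace; this is a condensed recap, not the authoritative test/nvmf/common.sh helper). The 10.0.0.2 and 10.0.0.3 target addresses live inside the nvmf_tgt_ns_spdk namespace and are reached from the initiator-side 10.0.0.1 across the nvmf_br bridge.

    # Condensed recap of nvmf_veth_init as traced above (sketch, not the real helper).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                                   # reachability check
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp                                                        # kernel initiator support

The earlier "Cannot find device" / "Cannot open network namespace" messages are simply the teardown of a topology that does not yet exist and are expected on a fresh run.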
00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.738 13:05:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.738 [2024-07-15 13:05:58.201647] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:21:45.738 [2024-07-15 13:05:58.201796] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.996 [2024-07-15 13:05:58.341559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:45.996 [2024-07-15 13:05:58.400361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.996 [2024-07-15 13:05:58.400420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.996 [2024-07-15 13:05:58.400431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.996 [2024-07-15 13:05:58.400440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.996 [2024-07-15 13:05:58.400447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.996 [2024-07-15 13:05:58.400694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.996 [2024-07-15 13:05:58.400706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.926 13:05:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:47.489 [2024-07-15 13:05:59.752631] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.489 13:05:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:47.746 Malloc0 00:21:47.746 13:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.310 13:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.567 13:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.133 [2024-07-15 13:06:01.313045] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.133 13:06:01 
nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96204 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96204 /var/tmp/bdevperf.sock 00:21:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96204 ']' 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.133 13:06:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.133 [2024-07-15 13:06:01.405925] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:21:49.133 [2024-07-15 13:06:01.406062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96204 ] 00:21:49.133 [2024-07-15 13:06:01.567580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.390 [2024-07-15 13:06:01.652393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.322 13:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.322 13:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:50.322 13:06:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:50.579 13:06:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:50.836 NVMe0n1 00:21:51.093 13:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96253 00:21:51.094 13:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.094 13:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:51.094 Running I/O for 10 seconds... 
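Condensing what host/timeout.sh has set up at this point (commands copied from the trace; a sketch of the sequence, not the script itself): the target is configured over the default /var/tmp/spdk.sock RPC socket, while bdevperf is started idle (-z) and driven over its own /var/tmp/bdevperf.sock. The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair on the attach presumably governs the reconnect behaviour the timeout cases below exercise.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem + listener.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf idles until a bdev is attached, then perform_tests starts the 10 s verify run
    # (both run in the background in the traced script, hence bdevperf_pid/rpc_pid above).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &

The next step in the trace, nvmf_subsystem_remove_listener, removes the 10.0.0.2:4420 listener out from under that verify workload, which is why the output that follows is a long run of "ABORTED - SQ DELETION" completions while the host side deals with the lost path.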
00:21:52.025 13:06:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.286 [2024-07-15 13:06:04.654614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.654993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.655006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 
[2024-07-15 13:06:04.655041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.286 [2024-07-15 13:06:04.655286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.655337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.286 [2024-07-15 13:06:04.655371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.286 [2024-07-15 13:06:04.655392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.655962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.655991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.287 [2024-07-15 13:06:04.656236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 
13:06:04.656510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.287 [2024-07-15 13:06:04.656743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.287 [2024-07-15 13:06:04.656774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.656972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.656994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.288 [2024-07-15 13:06:04.657257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.657942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 
13:06:04.657978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.657999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.288 [2024-07-15 13:06:04.658209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.288 [2024-07-15 13:06:04.658224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.289 [2024-07-15 13:06:04.658714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.658972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.658992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.289 [2024-07-15 13:06:04.659224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.289 [2024-07-15 13:06:04.659285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43864 len:8 PRP1 0x0 PRP2 0x0 00:21:52.289 [2024-07-15 13:06:04.659312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:52.289 [2024-07-15 13:06:04.659346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:52.289 [2024-07-15 13:06:04.659358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43872 len:8 PRP1 0x0 PRP2 0x0 00:21:52.289 [2024-07-15 13:06:04.659372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659436] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x107a8d0 was disconnected and freed. reset controller. 
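(Sketch, not captured output: the get_controller/get_bdev checks that host/timeout.sh runs a few lines below reduce to two rpc.py queries against the bdevperf RPC socket. The socket path, controller name and bdev name are the ones visible in this log; the condensed form here is illustrative only.)

  # Ask the running bdevperf app which NVMe controllers and bdevs it still has
  # registered while the TCP connection to the target is down.
  ctrlr=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  # Right after the qpair is torn down the controller and bdev still exist ...
  [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]]
  # ... and later in this log, once the controller is given up, the same queries return empty names.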
00:21:52.289 [2024-07-15 13:06:04.659549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.289 [2024-07-15 13:06:04.659571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.289 [2024-07-15 13:06:04.659602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.289 [2024-07-15 13:06:04.659632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.289 [2024-07-15 13:06:04.659663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.289 [2024-07-15 13:06:04.659677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d240 is same with the state(5) to be set 00:21:52.289 [2024-07-15 13:06:04.660347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.289 [2024-07-15 13:06:04.660384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d240 (9): Bad file descriptor 00:21:52.289 [2024-07-15 13:06:04.660530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.289 [2024-07-15 13:06:04.660558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100d240 with addr=10.0.0.2, port=4420 00:21:52.289 [2024-07-15 13:06:04.660576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d240 is same with the state(5) to be set 00:21:52.289 [2024-07-15 13:06:04.660605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d240 (9): Bad file descriptor 00:21:52.289 [2024-07-15 13:06:04.660632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:52.289 [2024-07-15 13:06:04.660649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:52.289 [2024-07-15 13:06:04.660666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:52.289 [2024-07-15 13:06:04.660704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
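(Sketch, not captured output: the cycle above of connect() refused with errno 111, "controller reinitialization failed" and another reset attempt is what host/timeout.sh is exercising; the target is no longer accepting connections on 10.0.0.2:4420, so bdev_nvme keeps retrying until its attach-time timeouts expire. The command below is copied from the rpc.py invocation used for the second bdevperf instance later in this log; the flag values used for the first instance are outside this excerpt, and the comments only paraphrase the flag names, not exact bdev_nvme semantics.)

  # Attach the remote controller with explicit reconnect/loss bounds:
  #   --reconnect-delay-sec 1       pause between reconnect attempts
  #   --fast-io-fail-timeout-sec 2  fail outstanding I/O back early while still reconnecting
  #   --ctrlr-loss-timeout-sec 5    give up on the controller after repeated failures
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1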
00:21:52.289 [2024-07-15 13:06:04.660719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.289 13:06:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:54.816 [2024-07-15 13:06:06.660963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.816 [2024-07-15 13:06:06.661041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100d240 with addr=10.0.0.2, port=4420 00:21:54.816 [2024-07-15 13:06:06.661058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d240 is same with the state(5) to be set 00:21:54.816 [2024-07-15 13:06:06.661087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d240 (9): Bad file descriptor 00:21:54.816 [2024-07-15 13:06:06.661106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:54.816 [2024-07-15 13:06:06.661116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:54.816 [2024-07-15 13:06:06.661127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.816 [2024-07-15 13:06:06.661156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.816 [2024-07-15 13:06:06.661168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.816 13:06:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:54.816 13:06:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:54.816 13:06:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:54.816 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:54.816 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:54.816 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:54.816 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:55.075 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:55.075 13:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:56.477 [2024-07-15 13:06:08.661427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.477 [2024-07-15 13:06:08.661515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100d240 with addr=10.0.0.2, port=4420 00:21:56.477 [2024-07-15 13:06:08.661542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d240 is same with the state(5) to be set 00:21:56.477 [2024-07-15 13:06:08.661579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100d240 (9): Bad file descriptor 00:21:56.477 [2024-07-15 13:06:08.661606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.477 [2024-07-15 13:06:08.661621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:56.477 [2024-07-15 13:06:08.661637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:21:56.477 [2024-07-15 13:06:08.661676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.477 [2024-07-15 13:06:08.661696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:58.378 [2024-07-15 13:06:10.661776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:58.378 [2024-07-15 13:06:10.661849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:58.378 [2024-07-15 13:06:10.661863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:58.378 [2024-07-15 13:06:10.661875] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:58.378 [2024-07-15 13:06:10.661904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:59.311 00:21:59.311 Latency(us) 00:21:59.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.311 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:59.311 Verification LBA range: start 0x0 length 0x4000 00:21:59.311 NVMe0n1 : 8.21 661.59 2.58 15.59 0.00 188717.54 4140.68 7046430.72 00:21:59.311 =================================================================================================================== 00:21:59.311 Total : 661.59 2.58 15.59 0.00 188717.54 4140.68 7046430.72 00:21:59.311 0 00:22:00.242 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:00.242 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:00.242 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.499 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:00.499 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:00.499 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:00.499 13:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96253 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96204 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96204 ']' 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96204 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.063 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96204 00:22:01.063 killing process with pid 96204 00:22:01.064 Received shutdown signal, test time was about 9.925240 seconds 00:22:01.064 00:22:01.064 Latency(us) 00:22:01.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.064 =================================================================================================================== 00:22:01.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96204' 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96204 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96204 00:22:01.064 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.625 [2024-07-15 13:06:13.926337] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96411 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96411 /var/tmp/bdevperf.sock 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96411 ']' 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.625 13:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:01.625 [2024-07-15 13:06:14.022834] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:22:01.625 [2024-07-15 13:06:14.022978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96411 ] 00:22:01.882 [2024-07-15 13:06:14.177345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.882 [2024-07-15 13:06:14.264222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.882 13:06:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.882 13:06:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:01.882 13:06:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.446 13:06:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:02.704 NVMe0n1 00:22:02.704 13:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96444 00:22:02.704 13:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.704 13:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:02.962 Running I/O for 10 seconds... 00:22:03.897 13:06:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.157 [2024-07-15 13:06:16.367366] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367447] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367467] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367479] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367488] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367496] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367504] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367514] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367522] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367530] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367538] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367546] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367554] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.157 [2024-07-15 13:06:16.367562] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367570] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367578] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367586] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367594] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367602] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367610] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367619] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367627] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367635] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367643] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367651] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367661] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367669] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367677] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367685] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367693] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367701] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367709] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367717] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367726] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367734] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367743] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367751] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367759] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367783] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367792] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367800] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367808] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367816] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367824] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367832] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367839] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367848] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367856] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367864] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367872] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367880] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367890] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367905] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367917] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the 
state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367929] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367937] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367946] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367954] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367962] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367970] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367977] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367986] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.367994] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368002] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368011] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368019] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368028] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368036] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368045] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368053] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368061] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368070] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368078] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368086] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368094] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368102] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368110] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368119] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368127] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.158 [2024-07-15 13:06:16.368135] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258cbc0 is same with the state(5) to be set 00:22:04.159 [2024-07-15 13:06:16.370253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:04.159 [2024-07-15 13:06:16.370477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.159 [2024-07-15 13:06:16.370880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.159 [2024-07-15 13:06:16.370911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.159 [2024-07-15 13:06:16.370920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.370931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.370941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.370952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.370961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.370972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.370987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.370998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76520 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 
[2024-07-15 13:06:16.371358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.160 [2024-07-15 13:06:16.371521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.160 [2024-07-15 13:06:16.371542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.160 [2024-07-15 13:06:16.371553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.161 [2024-07-15 13:06:16.371714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.371992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.161 [2024-07-15 13:06:16.372258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.161 [2024-07-15 13:06:16.372268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372301] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.162 [2024-07-15 13:06:16.372435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76928 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76936 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76944 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76952 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76960 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76968 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76976 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76984 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 
13:06:16.372745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77000 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.162 [2024-07-15 13:06:16.372873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.162 [2024-07-15 13:06:16.372881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.162 [2024-07-15 13:06:16.372889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77016 len:8 PRP1 0x0 PRP2 0x0 00:22:04.162 [2024-07-15 13:06:16.372897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.372907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.372914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.372924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77024 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.372933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.372942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.372950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.372957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77032 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.372975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.372982] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.372990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.372999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77048 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77088 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.373230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.373245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.373260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.373272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:22:04.163 [2024-07-15 13:06:16.389812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.163 [2024-07-15 13:06:16.389834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.163 [2024-07-15 13:06:16.389844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.163 [2024-07-15 13:06:16.389853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.389862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.389871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.389879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.389887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77120 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.389896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.389905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.389912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.389929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.389939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.389946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 
13:06:16.389954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.389963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.389972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.389979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.389986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77144 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.389995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.390010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.390022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.390036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77152 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.390050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.390067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.390078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.390091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.390108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.390124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.390136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.390148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.390162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.390177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.164 [2024-07-15 13:06:16.390189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.164 [2024-07-15 13:06:16.390203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:22:04.164 [2024-07-15 13:06:16.390217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.164 [2024-07-15 13:06:16.390304] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xccf8d0 was disconnected and freed. reset controller. 
00:22:04.164 [2024-07-15 13:06:16.390524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:04.164 [2024-07-15 13:06:16.390554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.164 [2024-07-15 13:06:16.390575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:04.164 [2024-07-15 13:06:16.390586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.164 [2024-07-15 13:06:16.390597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:04.164 [2024-07-15 13:06:16.390606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.164 [2024-07-15 13:06:16.390616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:04.164 [2024-07-15 13:06:16.390625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.164 [2024-07-15 13:06:16.390634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set
00:22:04.164 [2024-07-15 13:06:16.390891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:04.164 [2024-07-15 13:06:16.390918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor
00:22:04.164 [2024-07-15 13:06:16.391023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:04.164 [2024-07-15 13:06:16.391046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420
00:22:04.164 [2024-07-15 13:06:16.391057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set
00:22:04.164 [2024-07-15 13:06:16.391075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor
00:22:04.164 [2024-07-15 13:06:16.391092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:04.164 [2024-07-15 13:06:16.391102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:04.164 [2024-07-15 13:06:16.391113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:04.164 [2024-07-15 13:06:16.391133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
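For orientation: errno 111 on Linux is ECONNREFUSED, so the reconnect failures above are expected while host/timeout.sh has the target's listener removed; every controller reset keeps failing until the listener on 10.0.0.2:4420 comes back. A small, hypothetical bash probe (not part of the test suite) that watches for the port to start accepting connections again could look like the sketch below, assuming bash's /dev/tcp redirection is available on the host:

    # Hypothetical helper, not part of host/timeout.sh: poll the NVMe/TCP
    # listener until the connection is no longer refused (errno 111).
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        echo "$(date +%T) 10.0.0.2:4420 still refusing connections (ECONNREFUSED)"
        sleep 1
    done
    echo "$(date +%T) 10.0.0.2:4420 is accepting connections again"

The subshell exits non-zero while connect() is refused and zero once the listener is back, which is exactly the transition the log records when the listener is re-added below.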
00:22:04.164 [2024-07-15 13:06:16.391144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:04.164 13:06:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:05.097 [2024-07-15 13:06:17.391290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:05.097 [2024-07-15 13:06:17.391370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420
00:22:05.097 [2024-07-15 13:06:17.391387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set
00:22:05.097 [2024-07-15 13:06:17.391415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor
00:22:05.097 [2024-07-15 13:06:17.391434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:05.097 [2024-07-15 13:06:17.391445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:05.097 [2024-07-15 13:06:17.391456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:05.097 [2024-07-15 13:06:17.391484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:05.097 [2024-07-15 13:06:17.391496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:05.097 13:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:05.355 [2024-07-15 13:06:17.732084] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:05.355 13:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96444
00:22:06.297 [2024-07-15 13:06:18.405853] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:12.868
00:22:12.868                                                Latency(us)
00:22:12.868 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:12.868 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:12.868 Verification LBA range: start 0x0 length 0x4000
00:22:12.868 NVMe0n1                     :      10.01    5881.68      22.98       0.00     0.00   21715.87    2293.76 3035150.89
00:22:12.868 ===================================================================================================================
00:22:12.868 Total                       :               5881.68      22.98       0.00     0.00   21715.87    2293.76 3035150.89
00:22:12.868 0
00:22:12.868 13:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96557
00:22:12.868 13:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:12.868 13:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:13.126 Running I/O for 10 seconds...
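The trace lines above and immediately below capture the listener-toggle pattern this timeout test exercises: re-add the TCP listener so the host's pending controller reset can finally succeed, kick off another timed bdevperf pass, then drop the listener again mid-run to provoke the abort/timeout path. A condensed, approximate sketch of those steps is shown here; the paths and RPC method names are taken verbatim from the trace, but the earlier setup (target, subsystem, and the long-running bdevperf session behind /var/tmp/bdevperf.sock) is assumed to have happened before this point in the log, and the backgrounding/PID handling is simplified:

    # Condensed sketch of host/timeout.sh steps @90-@99 as they appear in this log;
    # not the script itself, and setup of the target/bdevperf session is not shown.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Restore the listener so the host's queued resets complete ("Resetting controller successful.").
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Start another timed I/O pass against the already-running bdevperf session
    # (the log shows it backgrounded, with its PID captured as rpc_pid).
    "$bperf_py" -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1

    # Drop the listener again mid-run to exercise the timeout/abort handling below.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420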
00:22:14.061 13:06:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.322 [2024-07-15 13:06:26.613051] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613116] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613134] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613147] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613160] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613173] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613186] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613198] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613209] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613222] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613234] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613246] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613259] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613271] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613283] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613296] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613309] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613322] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613335] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613348] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613361] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613374] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613386] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613398] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613411] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613424] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613437] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.613451] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5680 is same with the state(5) to be set 00:22:14.322 [2024-07-15 13:06:26.614293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.322 [2024-07-15 13:06:26.614359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:14.322 [2024-07-15 13:06:26.614463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 
13:06:26.614671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.322 [2024-07-15 13:06:26.614753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.322 [2024-07-15 13:06:26.614780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.614987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.614996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 
[2024-07-15 13:06:26.615552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.323 [2024-07-15 13:06:26.615675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.323 [2024-07-15 13:06:26.615684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615976] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.615985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.615996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.324 [2024-07-15 13:06:26.616421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.324 [2024-07-15 13:06:26.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.324 [2024-07-15 13:06:26.616605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 
13:06:26.616635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.616986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.616996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.617016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.617037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.325 [2024-07-15 13:06:26.617060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.325 [2024-07-15 13:06:26.617103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.325 [2024-07-15 
13:06:26.617112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 00:22:14.325 [2024-07-15 13:06:26.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617175] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xce2a00 was disconnected and freed. reset controller. 00:22:14.325 [2024-07-15 13:06:26.617282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.325 [2024-07-15 13:06:26.617299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.325 [2024-07-15 13:06:26.617319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.325 [2024-07-15 13:06:26.617339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.325 [2024-07-15 13:06:26.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.325 [2024-07-15 13:06:26.617367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set 00:22:14.325 [2024-07-15 13:06:26.617599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.325 [2024-07-15 13:06:26.617623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor 00:22:14.325 [2024-07-15 13:06:26.617733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.325 [2024-07-15 13:06:26.617755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420 00:22:14.325 [2024-07-15 13:06:26.617781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set 00:22:14.325 [2024-07-15 13:06:26.617804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor 00:22:14.325 [2024-07-15 13:06:26.617820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.325 [2024-07-15 13:06:26.617829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.325 [2024-07-15 13:06:26.617841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.325 [2024-07-15 13:06:26.617861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:14.325 [2024-07-15 13:06:26.617872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.325 13:06:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:15.259 [2024-07-15 13:06:27.629465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.259 [2024-07-15 13:06:27.629545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420 00:22:15.259 [2024-07-15 13:06:27.629564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set 00:22:15.259 [2024-07-15 13:06:27.629592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor 00:22:15.259 [2024-07-15 13:06:27.629613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:15.259 [2024-07-15 13:06:27.629623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:15.259 [2024-07-15 13:06:27.629634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:15.259 [2024-07-15 13:06:27.629663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.259 [2024-07-15 13:06:27.629675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.192 [2024-07-15 13:06:28.629850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.192 [2024-07-15 13:06:28.629955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420 00:22:16.192 [2024-07-15 13:06:28.629981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set 00:22:16.192 [2024-07-15 13:06:28.630022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor 00:22:16.192 [2024-07-15 13:06:28.630052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.192 [2024-07-15 13:06:28.630068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.192 [2024-07-15 13:06:28.630084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.192 [2024-07-15 13:06:28.630124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:16.192 [2024-07-15 13:06:28.630144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:17.565 [2024-07-15 13:06:29.630548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.565 [2024-07-15 13:06:29.630642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc62240 with addr=10.0.0.2, port=4420 00:22:17.565 [2024-07-15 13:06:29.630672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc62240 is same with the state(5) to be set 00:22:17.565 [2024-07-15 13:06:29.631013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62240 (9): Bad file descriptor 00:22:17.565 [2024-07-15 13:06:29.631308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.565 [2024-07-15 13:06:29.631352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:17.565 [2024-07-15 13:06:29.631371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.565 [2024-07-15 13:06:29.635412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.565 [2024-07-15 13:06:29.635464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:17.565 13:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.565 [2024-07-15 13:06:30.033047] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.823 13:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96557 00:22:18.388 [2024-07-15 13:06:30.677096] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
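What the excerpt above captures is the listener-bounce half of the timeout test: with the target's TCP listener gone, every reconnect attempt from the host fails with errno 111 (connection refused), and once host/timeout.sh re-adds the listener the next controller reset succeeds. A minimal sketch of that bounce, using the same rpc.py calls that appear in this log (the remove call for this first pass happened before this excerpt, the 3-second pause mirrors the sleep 3 traced above, and the wrapper itself is illustrative rather than the actual host/timeout.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Take the TCP listener away so the initiator's connection to the subsystem drops.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    # Let the host retry for a few seconds; each connect() is expected to fail with errno 111.
    sleep 3
    # Put the listener back; the next reconnect/reset attempt should succeed.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420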
00:22:23.654
00:22:23.654                                                        Latency(us)
00:22:23.654 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min          max
00:22:23.654 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:23.654 Verification LBA range: start 0x0 length 0x4000
00:22:23.654 NVMe0n1            :      10.01    4984.06      19.47    3485.53       0.00   15083.11     741.00   3019898.88
00:22:23.654 ===================================================================================================================
00:22:23.654 Total              :               4984.06      19.47    3485.53       0.00   15083.11       0.00   3019898.88
00:22:23.654 0
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96411
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96411 ']'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96411
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96411
00:22:23.654 killing process with pid 96411 Received shutdown signal, test time was about 10.000000 seconds
00:22:23.654
00:22:23.654                                                        Latency(us)
00:22:23.654 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min          max
00:22:23.654 ===================================================================================================================
00:22:23.654 Total              :                  0.00       0.00       0.00       0.00       0.00       0.00         0.00
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96411'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96411
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96411
00:22:23.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96684
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96684 /var/tmp/bdevperf.sock
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96684 ']'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:23.654 [2024-07-15 13:06:35.669583] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization...
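As a quick sanity check on the first run's summary table above: with the 4096-byte I/O size from the Job line, the MiB/s column should come out to IOPS x 4096 / 2^20. A one-liner to confirm (plain shell, not part of the test itself):

    python3 -c 'print(round(4984.06 * 4096 / 1048576, 2))'    # prints 19.47, matching the MiB/s column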
00:22:23.654 [2024-07-15 13:06:35.669712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96684 ]
00:22:23.654 [2024-07-15 13:06:35.809011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:23.654 [2024-07-15 13:06:35.898024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96693
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96684 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:22:23.654 13:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:22:23.910 13:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:22:24.475 NVMe0n1
00:22:24.475 13:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96752
00:22:24.475 13:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:24.475 13:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:22:24.731 Running I/O for 10 seconds...
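Condensed from the trace above, the second bdevperf pass is configured entirely over its own RPC socket before perform_tests starts the workload. A sketch of that same sequence (the commands are copied from the trace; the shell variables, backgrounding, and the optional bpftrace step are added here for readability, and the real test additionally waits for the RPC socket with waitforlisten before issuing RPCs):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Start bdevperf idle (-z) so it waits for configuration on /var/tmp/bdevperf.sock.
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    # Optionally attach the nvmf_timeout.bt probes to the bdevperf pid, as the test does.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh $! /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &
    # Tune the NVMe bdev module, then attach the NVMe-oF TCP controller with a 5 s
    # controller-loss timeout and a 2 s reconnect delay, exactly as traced above.
    $rpc bdev_nvme_set_options -r -1 -e 9
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Kick off the queued randread workload defined on the bdevperf command line.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Judging by the option names, the attached controller should retry the connection every 2 seconds and be given up after roughly 5 seconds of controller loss; the nvmf_subsystem_remove_listener call traced right below is what pushes it down that path.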
00:22:25.660 13:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.920 [2024-07-15 13:06:38.215486] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215541] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215554] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215563] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215571] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215579] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215587] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215596] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215604] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215612] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215621] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215629] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215637] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215645] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215653] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215661] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215669] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215677] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215685] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215693] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215701] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215709] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215717] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215725] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.215734] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8f50 is same with the state(5) to be set 00:22:25.920 [2024-07-15 13:06:38.216329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.216947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:25.920 [2024-07-15 13:06:38.216980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.216998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.920 [2024-07-15 13:06:38.217294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.920 [2024-07-15 13:06:38.217311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217912] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.217976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.217994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.921 [2024-07-15 13:06:38.218528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.921 [2024-07-15 13:06:38.218537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 
13:06:38.218794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.218990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.922 [2024-07-15 13:06:38.219731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.922 [2024-07-15 13:06:38.219743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:25.923 [2024-07-15 13:06:38.219941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.219973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.219990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.220021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.220042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.923 [2024-07-15 13:06:38.220064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.923 [2024-07-15 13:06:38.220125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11304 len:8 PRP1 0x0 PRP2 0x0 00:22:25.923 [2024-07-15 13:06:38.220142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.923 [2024-07-15 13:06:38.220181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.923 [2024-07-15 13:06:38.220193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23312 len:8 PRP1 0x0 PRP2 0x0 00:22:25.923 [2024-07-15 13:06:38.220203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.923 [2024-07-15 13:06:38.220221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.923 [2024-07-15 13:06:38.220229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110576 len:8 PRP1 0x0 PRP2 0x0 00:22:25.923 [2024-07-15 13:06:38.220240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.923 [2024-07-15 13:06:38.220269] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.923 [2024-07-15 13:06:38.220283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73784 len:8 PRP1 0x0 PRP2 0x0 00:22:25.923 [2024-07-15 13:06:38.220295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.923 [2024-07-15 13:06:38.220342] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b298d0 was disconnected and freed. reset controller. 00:22:25.923 [2024-07-15 13:06:38.220655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.923 [2024-07-15 13:06:38.220792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abc240 (9): Bad file descriptor 00:22:25.923 [2024-07-15 13:06:38.220935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.923 [2024-07-15 13:06:38.220966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abc240 with addr=10.0.0.2, port=4420 00:22:25.923 [2024-07-15 13:06:38.220979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abc240 is same with the state(5) to be set 00:22:25.923 [2024-07-15 13:06:38.220999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abc240 (9): Bad file descriptor 00:22:25.923 [2024-07-15 13:06:38.221015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.923 [2024-07-15 13:06:38.221025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:25.923 [2024-07-15 13:06:38.221038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.923 [2024-07-15 13:06:38.221069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.923 [2024-07-15 13:06:38.221086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.923 13:06:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96752 00:22:27.816 [2024-07-15 13:06:40.221320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.816 [2024-07-15 13:06:40.221415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abc240 with addr=10.0.0.2, port=4420 00:22:27.816 [2024-07-15 13:06:40.221442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abc240 is same with the state(5) to be set 00:22:27.816 [2024-07-15 13:06:40.221483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abc240 (9): Bad file descriptor 00:22:27.816 [2024-07-15 13:06:40.221532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.816 [2024-07-15 13:06:40.221553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:27.816 [2024-07-15 13:06:40.221571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.816 [2024-07-15 13:06:40.221613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
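The sequence above is the heart of the timeout test: once the target stops listening, every READ still queued on qpair 0x1b298d0 is completed manually with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme retries the controller reset roughly every two seconds (13:06:38, :40, :42 and :44 below) while connect() keeps failing with errno 111, until the retry budget runs out and the reset is declared failed. The harness then appears to count the traced 'reconnect delay' events to verify this cadence (the grep -c / (( 3 <= 2 )) check further down). A minimal sketch of how such a reconnect policy is typically configured through SPDK's JSON-RPC client follows; the numeric values are illustrative assumptions, not the exact parameters used by host/timeout.sh.

  # Hedged sketch only -- not the literal host/timeout.sh invocation. The target
  # address, port, subsystem NQN and bdev name are taken from the log above; the
  # timeout, reconnect-delay and loss-timeout values are assumptions chosen to
  # match the ~2 s retry cadence visible in the trace.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Reset the controller when an I/O times out; the reset is what turns the
  # queued READs above into "ABORTED - SQ DELETION" completions.
  $rpc_py bdev_nvme_set_options --timeout-us 5000000 --action-on-timeout reset

  # Attach the remote controller with a bounded retry policy: try to reconnect
  # every 2 seconds and give up after ~10 seconds of controller loss.
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 10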
00:22:27.816 [2024-07-15 13:06:40.221633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.346 [2024-07-15 13:06:42.221855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.346 [2024-07-15 13:06:42.221932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abc240 with addr=10.0.0.2, port=4420 00:22:30.346 [2024-07-15 13:06:42.221951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abc240 is same with the state(5) to be set 00:22:30.346 [2024-07-15 13:06:42.221981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abc240 (9): Bad file descriptor 00:22:30.346 [2024-07-15 13:06:42.222003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.346 [2024-07-15 13:06:42.222013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.346 [2024-07-15 13:06:42.222024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.346 [2024-07-15 13:06:42.222054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.346 [2024-07-15 13:06:42.222076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:32.244 [2024-07-15 13:06:44.222145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.244 [2024-07-15 13:06:44.222218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.244 [2024-07-15 13:06:44.222232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:32.244 [2024-07-15 13:06:44.222242] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:32.244 [2024-07-15 13:06:44.222271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:32.842 00:22:32.842 Latency(us) 00:22:32.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.842 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:32.842 NVMe0n1 : 8.22 2496.06 9.75 15.57 0.00 50909.63 2621.44 7015926.69 00:22:32.842 =================================================================================================================== 00:22:32.842 Total : 2496.06 9.75 15.57 0.00 50909.63 2621.44 7015926.69 00:22:32.842 0 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.842 Attaching 5 probes... 
00:22:32.842 1627.897461: reset bdev controller NVMe0 00:22:32.842 1628.101855: reconnect bdev controller NVMe0 00:22:32.842 3628.390351: reconnect delay bdev controller NVMe0 00:22:32.842 3628.424147: reconnect bdev controller NVMe0 00:22:32.842 5628.928734: reconnect delay bdev controller NVMe0 00:22:32.842 5628.957136: reconnect bdev controller NVMe0 00:22:32.842 7629.360530: reconnect delay bdev controller NVMe0 00:22:32.842 7629.386870: reconnect bdev controller NVMe0 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96693 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96684 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96684 ']' 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96684 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96684 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:32.842 killing process with pid 96684 00:22:32.842 Received shutdown signal, test time was about 8.282364 seconds 00:22:32.842 00:22:32.842 Latency(us) 00:22:32.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.842 =================================================================================================================== 00:22:32.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96684' 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96684 00:22:32.842 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96684 00:22:33.108 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.365 rmmod nvme_tcp 00:22:33.365 rmmod nvme_fabrics 00:22:33.365 rmmod nvme_keyring 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@128 -- # 
set -e 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@493 -- # '[' -n 96103 ']' 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@494 -- # killprocess 96103 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96103 ']' 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96103 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.365 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96103 00:22:33.623 killing process with pid 96103 00:22:33.623 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.623 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.623 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96103' 00:22:33.623 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96103 00:22:33.623 13:06:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96103 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.623 13:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:22:33.623 ************************************ 00:22:33.623 END TEST nvmf_timeout 00:22:33.623 ************************************ 00:22:33.623 00:22:33.624 real 0m48.372s 00:22:33.624 user 2m24.261s 00:22:33.624 sys 0m4.976s 00:22:33.624 13:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.624 13:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:33.882 13:06:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:33.882 13:06:46 nvmf_tcp -- nvmf/nvmf.sh@125 -- # [[ virt == phy ]] 00:22:33.882 13:06:46 nvmf_tcp -- nvmf/nvmf.sh@130 -- # timing_exit host 00:22:33.882 13:06:46 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.882 13:06:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.882 13:06:46 nvmf_tcp -- nvmf/nvmf.sh@132 -- # trap - SIGINT SIGTERM EXIT 00:22:33.882 00:22:33.882 real 15m52.661s 00:22:33.882 user 42m51.559s 00:22:33.882 sys 3m19.374s 00:22:33.882 ************************************ 00:22:33.882 END TEST nvmf_tcp 00:22:33.882 ************************************ 00:22:33.882 13:06:46 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.882 13:06:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.882 13:06:46 -- common/autotest_common.sh@1142 -- 
# return 0 00:22:33.882 13:06:46 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:22:33.882 13:06:46 -- spdk/autotest.sh@289 -- # export TEST_INTERRUPT_MODE=1 00:22:33.882 13:06:46 -- spdk/autotest.sh@289 -- # TEST_INTERRUPT_MODE=1 00:22:33.882 13:06:46 -- spdk/autotest.sh@290 -- # run_test nvmf_tcp_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:33.882 13:06:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:33.882 13:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.882 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.882 ************************************ 00:22:33.882 START TEST nvmf_tcp_interrupt_mode 00:22:33.882 ************************************ 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:33.882 * Looking for test storage... 00:22:33.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@10 -- # uname -s 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.882 13:06:46 
nvmf_tcp_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.882 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@20 -- # timing_enter target 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode 
-- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:33.883 ************************************ 00:22:33.883 START TEST nvmf_example 00:22:33.883 ************************************ 00:22:33.883 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:22:33.883 * Looking for test storage... 00:22:33.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.143 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- paths/export.sh@5 -- # export PATH 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.144 
13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- target/nvmf_example.sh@11 -- # '[' 1 -eq 1 ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- target/nvmf_example.sh@12 -- # basename /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh 00:22:34.144 skipping nvmf_example.sh test in the interrupt mode 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- target/nvmf_example.sh@12 -- # echo 'skipping nvmf_example.sh test in the interrupt mode' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- target/nvmf_example.sh@13 -- # exit 0 00:22:34.144 00:22:34.144 real 0m0.098s 00:22:34.144 user 0m0.050s 00:22:34.144 sys 0m0.055s 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.144 ************************************ 00:22:34.144 END TEST nvmf_example 00:22:34.144 ************************************ 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:34.144 ************************************ 00:22:34.144 START TEST nvmf_filesystem 00:22:34.144 ************************************ 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:22:34.144 * Looking for test storage... 
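For readers skimming the trace above: the xtrace around nvmf/common.sh@25-39 and target/nvmf_example.sh@11-13 shows two small guards at work, (1) the app-argument builder appending "--interrupt-mode" to the NVMF_APP array when the interrupt-mode autotest run is active, and (2) nvmf_example.sh exiting early for the same reason. The following is a minimal, hypothetical reconstruction of that pattern under simplified names (NVMF_INTERRUPT_MODE and the placeholder nvmf_tgt path are stand-ins, not the literal SPDK scripts):

    #!/usr/bin/env bash
    # Sketch of the guards traced above; NVMF_INTERRUPT_MODE and NVMF_APP_SHM_ID
    # stand in for values the real harness derives elsewhere.
    NVMF_APP=(./build/bin/nvmf_tgt)   # placeholder target binary path

    build_nvmf_app_args() {
        # Always pass the shared-memory id and a wide trace-flag mask.
        NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
        # Add --interrupt-mode only when the interrupt-mode run is requested.
        if [ "${NVMF_INTERRUPT_MODE:-0}" -eq 1 ]; then
            NVMF_APP+=(--interrupt-mode)
        fi
    }

    # Individual test scripts can opt out of the interrupt-mode run entirely,
    # which is what produces the "skipping ... in the interrupt mode" line above:
    if [ "${NVMF_INTERRUPT_MODE:-0}" -eq 1 ]; then
        echo "skipping nvmf_example.sh test in the interrupt mode"
        exit 0
    fi

The early exit 0 is why the nvmf_example block above reports END TEST after only ~0.1s of real time before the harness moves on to run_test nvmf_filesystem.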
00:22:34.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:34.144 13:06:46 
nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:22:34.144 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:22:34.145 #define SPDK_CONFIG_H 00:22:34.145 #define SPDK_CONFIG_APPS 1 00:22:34.145 #define SPDK_CONFIG_ARCH native 00:22:34.145 #undef SPDK_CONFIG_ASAN 00:22:34.145 #define SPDK_CONFIG_AVAHI 1 00:22:34.145 #undef SPDK_CONFIG_CET 00:22:34.145 #define SPDK_CONFIG_COVERAGE 1 00:22:34.145 #define SPDK_CONFIG_CROSS_PREFIX 00:22:34.145 #undef SPDK_CONFIG_CRYPTO 00:22:34.145 #undef SPDK_CONFIG_CRYPTO_MLX5 00:22:34.145 #undef SPDK_CONFIG_CUSTOMOCF 00:22:34.145 #undef SPDK_CONFIG_DAOS 00:22:34.145 #define SPDK_CONFIG_DAOS_DIR 00:22:34.145 #define SPDK_CONFIG_DEBUG 1 00:22:34.145 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:22:34.145 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:34.145 #define SPDK_CONFIG_DPDK_INC_DIR 00:22:34.145 #define SPDK_CONFIG_DPDK_LIB_DIR 00:22:34.145 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:22:34.145 #undef SPDK_CONFIG_DPDK_UADK 00:22:34.145 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:34.145 #define SPDK_CONFIG_EXAMPLES 1 
00:22:34.145 #undef SPDK_CONFIG_FC 00:22:34.145 #define SPDK_CONFIG_FC_PATH 00:22:34.145 #define SPDK_CONFIG_FIO_PLUGIN 1 00:22:34.145 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:22:34.145 #undef SPDK_CONFIG_FUSE 00:22:34.145 #undef SPDK_CONFIG_FUZZER 00:22:34.145 #define SPDK_CONFIG_FUZZER_LIB 00:22:34.145 #define SPDK_CONFIG_GOLANG 1 00:22:34.145 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:22:34.145 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:22:34.145 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:22:34.145 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:22:34.145 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:22:34.145 #undef SPDK_CONFIG_HAVE_LIBBSD 00:22:34.145 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:22:34.145 #define SPDK_CONFIG_IDXD 1 00:22:34.145 #define SPDK_CONFIG_IDXD_KERNEL 1 00:22:34.145 #undef SPDK_CONFIG_IPSEC_MB 00:22:34.145 #define SPDK_CONFIG_IPSEC_MB_DIR 00:22:34.145 #define SPDK_CONFIG_ISAL 1 00:22:34.145 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:22:34.145 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:22:34.145 #define SPDK_CONFIG_LIBDIR 00:22:34.145 #undef SPDK_CONFIG_LTO 00:22:34.145 #define SPDK_CONFIG_MAX_LCORES 128 00:22:34.145 #define SPDK_CONFIG_NVME_CUSE 1 00:22:34.145 #undef SPDK_CONFIG_OCF 00:22:34.145 #define SPDK_CONFIG_OCF_PATH 00:22:34.145 #define SPDK_CONFIG_OPENSSL_PATH 00:22:34.145 #undef SPDK_CONFIG_PGO_CAPTURE 00:22:34.145 #define SPDK_CONFIG_PGO_DIR 00:22:34.145 #undef SPDK_CONFIG_PGO_USE 00:22:34.145 #define SPDK_CONFIG_PREFIX /usr/local 00:22:34.145 #undef SPDK_CONFIG_RAID5F 00:22:34.145 #undef SPDK_CONFIG_RBD 00:22:34.145 #define SPDK_CONFIG_RDMA 1 00:22:34.145 #define SPDK_CONFIG_RDMA_PROV verbs 00:22:34.145 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:22:34.145 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:22:34.145 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:22:34.145 #define SPDK_CONFIG_SHARED 1 00:22:34.145 #undef SPDK_CONFIG_SMA 00:22:34.145 #define SPDK_CONFIG_TESTS 1 00:22:34.145 #undef SPDK_CONFIG_TSAN 00:22:34.145 #define SPDK_CONFIG_UBLK 1 00:22:34.145 #define SPDK_CONFIG_UBSAN 1 00:22:34.145 #undef SPDK_CONFIG_UNIT_TESTS 00:22:34.145 #undef SPDK_CONFIG_URING 00:22:34.145 #define SPDK_CONFIG_URING_PATH 00:22:34.145 #undef SPDK_CONFIG_URING_ZNS 00:22:34.145 #define SPDK_CONFIG_USDT 1 00:22:34.145 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:22:34.145 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:22:34.145 #undef SPDK_CONFIG_VFIO_USER 00:22:34.145 #define SPDK_CONFIG_VFIO_USER_DIR 00:22:34.145 #define SPDK_CONFIG_VHOST 1 00:22:34.145 #define SPDK_CONFIG_VIRTIO 1 00:22:34.145 #undef SPDK_CONFIG_VTUNE 00:22:34.145 #define SPDK_CONFIG_VTUNE_DIR 00:22:34.145 #define SPDK_CONFIG_WERROR 1 00:22:34.145 #define SPDK_CONFIG_WPDK_DIR 00:22:34.145 #undef SPDK_CONFIG_XNVME 00:22:34.145 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:22:34.145 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@68 -- # uname -s 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/autotest_common.sh@90 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/autotest_common.sh@152 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:22:34.146 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:22:34.147 13:06:46 
nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 97031 ]] 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 97031 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QNcad5 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:22:34.147 13:06:46 
nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.QNcad5/tests/target /tmp/spdk.QNcad5 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:22:34.147 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6260076544 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267883520 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7806976 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2487025664 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=20131840 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13769502720 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=20314062848 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5260218368 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267731968 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=155648 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13769502720 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5260218368 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_4/fedora38-libvirt/output 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=95217897472 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4484882432 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:22:34.148 * Looking for test storage... 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.148 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13769502720 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:22:34.406 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:22:34.407 13:06:46 
nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@436 -- # nvmf_veth_init 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:22:34.407 Cannot find device "nvmf_tgt_br" 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.407 Cannot find device "nvmf_tgt_br2" 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@160 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:22:34.407 Cannot find device "nvmf_tgt_br" 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:22:34.407 Cannot find device "nvmf_tgt_br2" 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:22:34.407 13:06:46 
nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:34.407 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:22:34.664 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:34.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:34.664 00:22:34.664 --- 10.0.0.2 ping statistics --- 00:22:34.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.664 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:34.664 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:22:34.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:34.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:34.665 00:22:34.665 --- 10.0.0.3 ping statistics --- 00:22:34.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.665 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:34.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:22:34.665 00:22:34.665 --- 10.0.0.1 ping statistics --- 00:22:34.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.665 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@437 -- # return 0 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:34.665 13:06:46 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 ************************************ 00:22:34.665 START TEST nvmf_filesystem_no_in_capsule 00:22:34.665 ************************************ 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=97195 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 97195 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 97195 ']' 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.665 13:06:47 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:34.665 [2024-07-15 13:06:47.105266] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:34.665 [2024-07-15 13:06:47.107127] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:22:34.665 [2024-07-15 13:06:47.107226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.923 [2024-07-15 13:06:47.251519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.923 [2024-07-15 13:06:47.313876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.923 [2024-07-15 13:06:47.313937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.923 [2024-07-15 13:06:47.313954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.923 [2024-07-15 13:06:47.313967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.923 [2024-07-15 13:06:47.313979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
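(Annotation added for readability; not part of the captured console output.) The nvmf_veth_init sequence traced earlier in this excerpt boils down to the condensed shell sketch below. Interface names, addresses, and the TCP port are taken verbatim from the log; the "Cannot find device" / "Cannot open network namespace" messages above come from the best-effort cleanup that runs before this bring-up.

  # Namespace plus three veth pairs: one initiator-side, two target-side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side peers together.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic on port 4420, verify reachability, load the host driver.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp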
00:22:34.923 [2024-07-15 13:06:47.314099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.923 [2024-07-15 13:06:47.314191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.923 [2024-07-15 13:06:47.314620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.923 [2024-07-15 13:06:47.314633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.923 [2024-07-15 13:06:47.376877] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:34.923 [2024-07-15 13:06:47.376942] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:34.923 [2024-07-15 13:06:47.377063] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:34.923 [2024-07-15 13:06:47.377101] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:34.923 [2024-07-15 13:06:47.377478] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 [2024-07-15 13:06:48.099557] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 Malloc1 00:22:35.857 13:06:48 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 [2024-07-15 13:06:48.235726] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:35.857 { 00:22:35.857 "aliases": [ 00:22:35.857 "d0e78896-59c8-4d2f-a7d9-a813d5f1ff55" 00:22:35.857 ], 00:22:35.857 "assigned_rate_limits": { 00:22:35.857 "r_mbytes_per_sec": 0, 00:22:35.857 "rw_ios_per_sec": 0, 00:22:35.857 "rw_mbytes_per_sec": 0, 00:22:35.857 "w_mbytes_per_sec": 0 00:22:35.857 }, 00:22:35.857 "block_size": 512, 00:22:35.857 "claim_type": "exclusive_write", 00:22:35.857 "claimed": true, 00:22:35.857 "driver_specific": {}, 00:22:35.857 "memory_domains": [ 00:22:35.857 { 00:22:35.857 "dma_device_id": "system", 00:22:35.857 "dma_device_type": 1 00:22:35.857 }, 00:22:35.857 { 00:22:35.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.857 "dma_device_type": 2 00:22:35.857 } 00:22:35.857 ], 00:22:35.857 "name": "Malloc1", 00:22:35.857 "num_blocks": 1048576, 00:22:35.857 "product_name": "Malloc disk", 00:22:35.857 "supported_io_types": { 00:22:35.857 "abort": true, 00:22:35.857 "compare": false, 00:22:35.857 "compare_and_write": false, 00:22:35.857 "copy": true, 00:22:35.857 "flush": true, 00:22:35.857 "get_zone_info": false, 00:22:35.857 "nvme_admin": false, 00:22:35.857 "nvme_io": false, 00:22:35.857 "nvme_io_md": false, 00:22:35.857 "nvme_iov_md": false, 00:22:35.857 "read": true, 00:22:35.857 "reset": true, 00:22:35.857 "seek_data": false, 00:22:35.857 "seek_hole": false, 00:22:35.857 "unmap": true, 00:22:35.857 "write": true, 00:22:35.857 "write_zeroes": true, 00:22:35.857 "zcopy": true, 00:22:35.857 "zone_append": false, 00:22:35.857 "zone_management": false 00:22:35.857 }, 00:22:35.857 "uuid": "d0e78896-59c8-4d2f-a7d9-a813d5f1ff55", 00:22:35.857 "zoned": false 00:22:35.857 } 00:22:35.857 ]' 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:22:35.857 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:36.124 13:06:48 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:36.124 13:06:48 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:22:38.018 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:22:38.274 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:22:38.274 13:06:50 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 
-- # '[' 4 -le 1 ']' 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:39.207 ************************************ 00:22:39.207 START TEST filesystem_ext4 00:22:39.207 ************************************ 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:22:39.207 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:22:39.207 mke2fs 1.46.5 (30-Dec-2021) 00:22:39.463 Discarding device blocks: 0/522240 done 00:22:39.463 Creating filesystem with 522240 1k blocks and 130560 inodes 00:22:39.463 Filesystem UUID: 94f8bbfc-3b77-49f2-bed1-3391ba3e9dee 00:22:39.463 Superblock backups stored on blocks: 00:22:39.463 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:22:39.463 00:22:39.463 Allocating group tables: 0/64 done 00:22:39.463 Writing inode tables: 0/64 done 00:22:39.463 Creating journal (8192 blocks): done 00:22:39.463 Writing superblocks and filesystem accounting information: 0/64 done 00:22:39.463 00:22:39.463 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:22:39.463 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:39.463 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:39.463 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:22:39.721 13:06:51 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:39.721 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:22:39.721 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:22:39.721 13:06:51 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 97195 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:39.721 00:22:39.721 real 0m0.402s 00:22:39.721 user 0m0.017s 00:22:39.721 sys 0m0.054s 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.721 ************************************ 00:22:39.721 END TEST filesystem_ext4 00:22:39.721 ************************************ 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:39.721 ************************************ 00:22:39.721 START TEST filesystem_btrfs 00:22:39.721 ************************************ 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:22:39.721 13:06:52 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:22:39.721 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:22:39.721 btrfs-progs v6.6.2 00:22:39.721 See https://btrfs.readthedocs.io for more information. 00:22:39.721 00:22:39.721 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:22:39.721 NOTE: several default settings have changed in version 5.15, please make sure 00:22:39.721 this does not affect your deployments: 00:22:39.721 - DUP for metadata (-m dup) 00:22:39.721 - enabled no-holes (-O no-holes) 00:22:39.722 - enabled free-space-tree (-R free-space-tree) 00:22:39.722 00:22:39.722 Label: (null) 00:22:39.722 UUID: 729d3ae6-fd78-4ac2-8f99-5fd4712426c0 00:22:39.722 Node size: 16384 00:22:39.722 Sector size: 4096 00:22:39.722 Filesystem size: 510.00MiB 00:22:39.722 Block group profiles: 00:22:39.722 Data: single 8.00MiB 00:22:39.722 Metadata: DUP 32.00MiB 00:22:39.722 System: DUP 8.00MiB 00:22:39.722 SSD detected: yes 00:22:39.722 Zoned device: no 00:22:39.722 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:22:39.722 Runtime features: free-space-tree 00:22:39.722 Checksum: crc32c 00:22:39.722 Number of devices: 1 00:22:39.722 Devices: 00:22:39.722 ID SIZE PATH 00:22:39.722 1 510.00MiB /dev/nvme0n1p1 00:22:39.722 00:22:39.722 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:22:39.722 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 
00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 97195 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:39.979 00:22:39.979 real 0m0.201s 00:22:39.979 user 0m0.018s 00:22:39.979 sys 0m0.062s 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:22:39.979 ************************************ 00:22:39.979 END TEST filesystem_btrfs 00:22:39.979 ************************************ 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:39.979 ************************************ 00:22:39.979 START TEST filesystem_xfs 00:22:39.979 ************************************ 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:22:39.979 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 
00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:22:39.980 13:06:52 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:22:39.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:22:39.980 = sectsz=512 attr=2, projid32bit=1 00:22:39.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:22:39.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:22:39.980 data = bsize=4096 blocks=130560, imaxpct=25 00:22:39.980 = sunit=0 swidth=0 blks 00:22:39.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:22:39.980 log =internal log bsize=4096 blocks=16384, version=2 00:22:39.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:22:39.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:22:40.910 Discarding blocks...Done. 00:22:40.910 13:06:53 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:22:40.910 13:06:53 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:42.859 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:42.859 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:22:42.859 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:42.859 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:22:42.859 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 97195 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:42.860 00:22:42.860 real 0m2.577s 00:22:42.860 user 0m0.020s 00:22:42.860 sys 0m0.048s 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:22:42.860 
************************************ 00:22:42.860 END TEST filesystem_xfs 00:22:42.860 ************************************ 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:42.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:42.860 13:06:54 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 97195 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 97195 ']' 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 97195 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97195 00:22:42.860 13:06:55 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.860 killing process with pid 97195 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97195' 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 97195 00:22:42.860 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 97195 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:22:43.119 00:22:43.119 real 0m8.306s 00:22:43.119 user 0m24.621s 00:22:43.119 sys 0m4.040s 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.119 ************************************ 00:22:43.119 END TEST nvmf_filesystem_no_in_capsule 00:22:43.119 ************************************ 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:43.119 ************************************ 00:22:43.119 START TEST nvmf_filesystem_in_capsule 00:22:43.119 ************************************ 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=97488 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 97488 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 97488 ']' 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:22:43.119 13:06:55 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:43.119 [2024-07-15 13:06:55.434494] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:43.119 [2024-07-15 13:06:55.435941] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:22:43.119 [2024-07-15 13:06:55.436006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.119 [2024-07-15 13:06:55.572904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.378 [2024-07-15 13:06:55.676519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.378 [2024-07-15 13:06:55.676604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.378 [2024-07-15 13:06:55.676626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.378 [2024-07-15 13:06:55.676642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.378 [2024-07-15 13:06:55.676657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.378 [2024-07-15 13:06:55.676826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.378 [2024-07-15 13:06:55.677179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.378 [2024-07-15 13:06:55.677613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.378 [2024-07-15 13:06:55.677642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.378 [2024-07-15 13:06:55.750719] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:43.378 [2024-07-15 13:06:55.750844] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:43.378 [2024-07-15 13:06:55.750875] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:43.378 [2024-07-15 13:06:55.751171] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
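The trace above shows nvmfappstart launching the target with --interrupt-mode on cores 0-3 and waitforlisten blocking until the RPC socket comes up. A minimal hand-rolled sketch of that start-and-wait step, assuming the tree layout and the default /var/tmp/spdk.sock socket seen in the log (the real helpers in autotest_common.sh add retries, locking and pid bookkeeping):

    # Start the NVMe-oF target inside the test namespace, interrupt mode, cores 0-3.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll until the app is still alive and its JSON-RPC UNIX socket has appeared.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done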
00:22:43.378 [2024-07-15 13:06:55.751544] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 [2024-07-15 13:06:56.462599] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 [2024-07-15 13:06:56.586789] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:44.313 { 00:22:44.313 "aliases": [ 00:22:44.313 "b6cf0126-f8fd-426e-8c0b-0c98c0fd4a1c" 00:22:44.313 ], 00:22:44.313 "assigned_rate_limits": { 00:22:44.313 "r_mbytes_per_sec": 0, 00:22:44.313 "rw_ios_per_sec": 0, 00:22:44.313 "rw_mbytes_per_sec": 0, 00:22:44.313 "w_mbytes_per_sec": 0 00:22:44.313 }, 00:22:44.313 "block_size": 512, 00:22:44.313 "claim_type": "exclusive_write", 00:22:44.313 "claimed": true, 00:22:44.313 "driver_specific": {}, 00:22:44.313 "memory_domains": [ 00:22:44.313 { 00:22:44.313 "dma_device_id": "system", 00:22:44.313 "dma_device_type": 1 00:22:44.313 }, 00:22:44.313 { 00:22:44.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.313 "dma_device_type": 2 00:22:44.313 } 00:22:44.313 ], 00:22:44.313 "name": "Malloc1", 00:22:44.313 "num_blocks": 1048576, 00:22:44.313 "product_name": "Malloc disk", 00:22:44.313 "supported_io_types": { 00:22:44.313 "abort": true, 00:22:44.313 "compare": false, 00:22:44.313 "compare_and_write": false, 00:22:44.313 "copy": true, 00:22:44.313 "flush": true, 00:22:44.313 "get_zone_info": false, 00:22:44.313 "nvme_admin": false, 00:22:44.313 "nvme_io": false, 00:22:44.313 "nvme_io_md": false, 00:22:44.313 
"nvme_iov_md": false, 00:22:44.313 "read": true, 00:22:44.313 "reset": true, 00:22:44.313 "seek_data": false, 00:22:44.313 "seek_hole": false, 00:22:44.313 "unmap": true, 00:22:44.313 "write": true, 00:22:44.313 "write_zeroes": true, 00:22:44.313 "zcopy": true, 00:22:44.313 "zone_append": false, 00:22:44.313 "zone_management": false 00:22:44.313 }, 00:22:44.313 "uuid": "b6cf0126-f8fd-426e-8c0b-0c98c0fd4a1c", 00:22:44.313 "zoned": false 00:22:44.313 } 00:22:44.313 ]' 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:22:44.313 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:44.314 13:06:56 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l 
-o NAME,SERIAL 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:22:46.841 13:06:58 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:47.774 ************************************ 00:22:47.774 START TEST filesystem_in_capsule_ext4 00:22:47.774 ************************************ 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:22:47.774 13:06:59 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:22:47.774 13:06:59 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:22:47.774 mke2fs 1.46.5 (30-Dec-2021) 00:22:47.774 Discarding device blocks: 0/522240 done 00:22:47.774 Creating filesystem with 522240 1k blocks and 130560 inodes 00:22:47.774 Filesystem UUID: 6ec78132-2469-4d58-bb41-5f2b6bcb18f2 00:22:47.774 Superblock backups stored on blocks: 00:22:47.774 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:22:47.774 00:22:47.774 Allocating group tables: 0/64 done 00:22:47.774 Writing inode tables: 0/64 done 00:22:47.774 Creating journal (8192 blocks): done 00:22:47.774 Writing superblocks and filesystem accounting information: 0/64 done 00:22:47.774 00:22:47.774 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:22:47.774 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:47.774 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 97488 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:48.032 00:22:48.032 real 0m0.346s 00:22:48.032 user 0m0.020s 00:22:48.032 sys 0m0.046s 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.032 ************************************ 00:22:48.032 END TEST filesystem_in_capsule_ext4 00:22:48.032 ************************************ 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:48.032 ************************************ 00:22:48.032 START TEST filesystem_in_capsule_btrfs 00:22:48.032 ************************************ 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:22:48.032 btrfs-progs v6.6.2 00:22:48.032 See https://btrfs.readthedocs.io for more information. 00:22:48.032 00:22:48.032 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:22:48.032 NOTE: several default settings have changed in version 5.15, please make sure 00:22:48.032 this does not affect your deployments: 00:22:48.032 - DUP for metadata (-m dup) 00:22:48.032 - enabled no-holes (-O no-holes) 00:22:48.032 - enabled free-space-tree (-R free-space-tree) 00:22:48.032 00:22:48.032 Label: (null) 00:22:48.032 UUID: 1924e951-c037-4c44-9932-c6af1c3a12f6 00:22:48.032 Node size: 16384 00:22:48.032 Sector size: 4096 00:22:48.032 Filesystem size: 510.00MiB 00:22:48.032 Block group profiles: 00:22:48.032 Data: single 8.00MiB 00:22:48.032 Metadata: DUP 32.00MiB 00:22:48.032 System: DUP 8.00MiB 00:22:48.032 SSD detected: yes 00:22:48.032 Zoned device: no 00:22:48.032 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:22:48.032 Runtime features: free-space-tree 00:22:48.032 Checksum: crc32c 00:22:48.032 Number of devices: 1 00:22:48.032 Devices: 00:22:48.032 ID SIZE PATH 00:22:48.032 1 510.00MiB /dev/nvme0n1p1 00:22:48.032 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:22:48.032 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 97488 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:48.291 00:22:48.291 real 0m0.188s 00:22:48.291 user 0m0.021s 00:22:48.291 sys 0m0.056s 00:22:48.291 13:07:00 
nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:22:48.291 ************************************ 00:22:48.291 END TEST filesystem_in_capsule_btrfs 00:22:48.291 ************************************ 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:48.291 ************************************ 00:22:48.291 START TEST filesystem_in_capsule_xfs 00:22:48.291 ************************************ 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:22:48.291 13:07:00 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:22:48.291 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:22:48.291 = sectsz=512 attr=2, projid32bit=1 00:22:48.291 = crc=1 finobt=1, sparse=1, rmapbt=0 00:22:48.291 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:22:48.291 data = bsize=4096 blocks=130560, imaxpct=25 00:22:48.291 = 
sunit=0 swidth=0 blks 00:22:48.291 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:22:48.291 log =internal log bsize=4096 blocks=16384, version=2 00:22:48.291 = sectsz=512 sunit=0 blks, lazy-count=1 00:22:48.291 realtime =none extsz=4096 blocks=0, rtextents=0 00:22:49.223 Discarding blocks...Done. 00:22:49.223 13:07:01 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:22:49.223 13:07:01 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 97488 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:51.201 00:22:51.201 real 0m2.607s 00:22:51.201 user 0m0.013s 00:22:51.201 sys 0m0.052s 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:22:51.201 ************************************ 00:22:51.201 END TEST filesystem_in_capsule_xfs 00:22:51.201 ************************************ 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:51.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 97488 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 97488 ']' 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 97488 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97488 00:22:51.201 killing process with pid 97488 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97488' 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 97488 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 
-- # wait 97488 00:22:51.201 ************************************ 00:22:51.201 END TEST nvmf_filesystem_in_capsule 00:22:51.201 ************************************ 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:22:51.201 00:22:51.201 real 0m8.241s 00:22:51.201 user 0m24.464s 00:22:51.201 sys 0m4.255s 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:51.201 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.459 rmmod nvme_tcp 00:22:51.459 rmmod nvme_fabrics 00:22:51.459 rmmod nvme_keyring 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:22:51.459 00:22:51.459 real 0m17.355s 00:22:51.459 user 0m49.289s 00:22:51.459 sys 0m8.672s 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.459 ************************************ 00:22:51.459 END TEST nvmf_filesystem 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 ************************************ 00:22:51.459 13:07:03 
nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 ************************************ 00:22:51.459 START TEST nvmf_target_discovery 00:22:51.459 ************************************ 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:22:51.459 * Looking for test storage... 00:22:51.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.459 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.460 13:07:03 
nvmf_tcp_interrupt_mode.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
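The nvmf_veth_init steps that follow build the virtual test network: nvmf_init_if stays on the host with 10.0.0.1/24, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge. Condensed into a sketch (stale-device cleanup and error handling omitted), the commands traced below amount to roughly:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the traffic endpoint, *_br is the end that joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side endpoints live inside the namespace; the initiator side stays on the host.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends together and admit NVMe/TCP traffic on port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT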
00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.460 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:22:51.718 Cannot find device "nvmf_tgt_br" 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.718 Cannot find device "nvmf_tgt_br2" 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@160 -- # true 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:22:51.718 Cannot find device "nvmf_tgt_br" 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:22:51.718 Cannot find device "nvmf_tgt_br2" 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:22:51.718 13:07:03 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.718 13:07:04 
nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:51.718 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:22:51.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:22:51.976 00:22:51.976 --- 10.0.0.2 ping statistics --- 00:22:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.976 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:22:51.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:51.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:22:51.976 00:22:51.976 --- 10.0.0.3 ping statistics --- 00:22:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.976 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:22:51.976 00:22:51.976 --- 10.0.0.1 ping statistics --- 00:22:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.976 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@437 -- # return 0 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:22:51.976 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@485 -- # nvmfpid=97926 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@486 -- # waitforlisten 97926 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 97926 ']' 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
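The nvmf_veth_init sequence traced above boils down to one network namespace, three veth pairs and a bridge. Condensed into plain commands, all taken from the trace (the teardown of stale devices and the individual 'ip link set ... up' calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair 1, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target pair 2, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                       # host-side veth ends join the bridge
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With connectivity confirmed by the three pings, NVMF_APP is prefixed with 'ip netns exec nvmf_tgt_ns_spdk' (common.sh@213), so the target runs inside the namespace and listens on 10.0.0.2 while the initiator-side nvme tools stay in the default namespace on 10.0.0.1.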
00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.977 13:07:04 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.977 [2024-07-15 13:07:04.345276] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:51.977 [2024-07-15 13:07:04.346361] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:22:51.977 [2024-07-15 13:07:04.346426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.235 [2024-07-15 13:07:04.480985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.235 [2024-07-15 13:07:04.547042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.235 [2024-07-15 13:07:04.547091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.235 [2024-07-15 13:07:04.547102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.235 [2024-07-15 13:07:04.547111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.235 [2024-07-15 13:07:04.547118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.235 [2024-07-15 13:07:04.547199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.235 [2024-07-15 13:07:04.547524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.235 [2024-07-15 13:07:04.548047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.235 [2024-07-15 13:07:04.548103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.235 [2024-07-15 13:07:04.603084] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:52.235 [2024-07-15 13:07:04.603642] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:52.235 [2024-07-15 13:07:04.603841] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:52.235 [2024-07-15 13:07:04.604100] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:22:52.235 [2024-07-15 13:07:04.604232] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
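With the target up in interrupt mode (four reactors, each poll-group thread switched to intr mode), discovery.sh configures it over RPC. The loop traced below, paraphrased from the xtrace (the script's literal text may differ slightly), amounts to:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create "Null$i" 102400 512                      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430        # NVMF_PORT_REFERRAL

The test then expects six discovery log entries (the current discovery subsystem, cnode1-4, and the port-4430 referral) from 'nvme discover ... -t tcp -a 10.0.0.2 -s 4420', cross-checks the same state over RPC with nvmf_get_subsystems, and finally deletes every subsystem, null bdev and referral again.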
00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 [2024-07-15 13:07:05.381243] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 Null1 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.169 [2024-07-15 13:07:05.433207] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 Null2 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.169 Null3 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.169 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 Null4 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 
10.0.0.2 -s 4420 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 4420 00:22:53.170 00:22:53.170 Discovery Log Number of Records 6, Generation counter 6 00:22:53.170 =====Discovery Log Entry 0====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: current discovery subsystem 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4420 00:22:53.170 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: explicit discovery connections, duplicate discovery information 00:22:53.170 sectype: none 00:22:53.170 =====Discovery Log Entry 1====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: nvme subsystem 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4420 00:22:53.170 subnqn: nqn.2016-06.io.spdk:cnode1 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: none 00:22:53.170 sectype: none 00:22:53.170 =====Discovery Log Entry 2====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: nvme subsystem 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4420 00:22:53.170 subnqn: nqn.2016-06.io.spdk:cnode2 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: none 00:22:53.170 sectype: none 00:22:53.170 =====Discovery Log Entry 3====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: nvme subsystem 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4420 00:22:53.170 subnqn: nqn.2016-06.io.spdk:cnode3 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: none 00:22:53.170 sectype: none 00:22:53.170 =====Discovery Log Entry 4====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: nvme subsystem 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4420 00:22:53.170 subnqn: nqn.2016-06.io.spdk:cnode4 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: none 00:22:53.170 sectype: none 00:22:53.170 =====Discovery Log Entry 5====== 00:22:53.170 trtype: tcp 00:22:53.170 adrfam: ipv4 00:22:53.170 subtype: discovery subsystem referral 00:22:53.170 treq: not required 00:22:53.170 portid: 0 00:22:53.170 trsvcid: 4430 00:22:53.170 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:53.170 traddr: 10.0.0.2 00:22:53.170 eflags: none 00:22:53.170 sectype: none 00:22:53.170 Perform nvmf subsystem discovery via RPC 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- 
target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.170 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.170 [ 00:22:53.170 { 00:22:53.170 "allow_any_host": true, 00:22:53.170 "hosts": [], 00:22:53.170 "listen_addresses": [ 00:22:53.170 { 00:22:53.170 "adrfam": "IPv4", 00:22:53.170 "traddr": "10.0.0.2", 00:22:53.170 "trsvcid": "4420", 00:22:53.170 "trtype": "TCP" 00:22:53.170 } 00:22:53.170 ], 00:22:53.170 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:53.170 "subtype": "Discovery" 00:22:53.170 }, 00:22:53.170 { 00:22:53.170 "allow_any_host": true, 00:22:53.170 "hosts": [], 00:22:53.170 "listen_addresses": [ 00:22:53.170 { 00:22:53.170 "adrfam": "IPv4", 00:22:53.170 "traddr": "10.0.0.2", 00:22:53.170 "trsvcid": "4420", 00:22:53.170 "trtype": "TCP" 00:22:53.170 } 00:22:53.170 ], 00:22:53.170 "max_cntlid": 65519, 00:22:53.170 "max_namespaces": 32, 00:22:53.170 "min_cntlid": 1, 00:22:53.170 "model_number": "SPDK bdev Controller", 00:22:53.170 "namespaces": [ 00:22:53.170 { 00:22:53.170 "bdev_name": "Null1", 00:22:53.170 "name": "Null1", 00:22:53.170 "nguid": "F457A082FF5B41C595470DD30D04D137", 00:22:53.170 "nsid": 1, 00:22:53.170 "uuid": "f457a082-ff5b-41c5-9547-0dd30d04d137" 00:22:53.170 } 00:22:53.170 ], 00:22:53.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.170 "serial_number": "SPDK00000000000001", 00:22:53.170 "subtype": "NVMe" 00:22:53.170 }, 00:22:53.170 { 00:22:53.170 "allow_any_host": true, 00:22:53.170 "hosts": [], 00:22:53.170 "listen_addresses": [ 00:22:53.170 { 00:22:53.170 "adrfam": "IPv4", 00:22:53.170 "traddr": "10.0.0.2", 00:22:53.170 "trsvcid": "4420", 00:22:53.170 "trtype": "TCP" 00:22:53.170 } 00:22:53.170 ], 00:22:53.170 "max_cntlid": 65519, 00:22:53.170 "max_namespaces": 32, 00:22:53.170 "min_cntlid": 1, 00:22:53.170 "model_number": "SPDK bdev Controller", 00:22:53.170 "namespaces": [ 00:22:53.170 { 00:22:53.170 "bdev_name": "Null2", 00:22:53.170 "name": "Null2", 00:22:53.170 "nguid": "FE6957D89E6145C695F1E876E72A114F", 00:22:53.170 "nsid": 1, 00:22:53.170 "uuid": "fe6957d8-9e61-45c6-95f1-e876e72a114f" 00:22:53.170 } 00:22:53.170 ], 00:22:53.170 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.170 "serial_number": "SPDK00000000000002", 00:22:53.170 "subtype": "NVMe" 00:22:53.170 }, 00:22:53.170 { 00:22:53.170 "allow_any_host": true, 00:22:53.170 "hosts": [], 00:22:53.171 "listen_addresses": [ 00:22:53.171 { 00:22:53.171 "adrfam": "IPv4", 00:22:53.171 "traddr": "10.0.0.2", 00:22:53.171 "trsvcid": "4420", 00:22:53.171 "trtype": "TCP" 00:22:53.171 } 00:22:53.171 ], 00:22:53.171 "max_cntlid": 65519, 00:22:53.171 "max_namespaces": 32, 00:22:53.171 "min_cntlid": 1, 00:22:53.171 "model_number": "SPDK bdev Controller", 00:22:53.171 "namespaces": [ 00:22:53.171 { 00:22:53.171 "bdev_name": "Null3", 00:22:53.171 "name": "Null3", 00:22:53.171 "nguid": "C5C6346D8C4A4F55AEE76B394D7B0FC1", 00:22:53.171 "nsid": 1, 00:22:53.171 "uuid": "c5c6346d-8c4a-4f55-aee7-6b394d7b0fc1" 00:22:53.171 } 00:22:53.171 ], 00:22:53.171 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:22:53.171 "serial_number": "SPDK00000000000003", 00:22:53.171 "subtype": "NVMe" 00:22:53.171 }, 00:22:53.171 { 00:22:53.171 "allow_any_host": true, 00:22:53.171 "hosts": [], 00:22:53.171 
"listen_addresses": [ 00:22:53.171 { 00:22:53.171 "adrfam": "IPv4", 00:22:53.440 "traddr": "10.0.0.2", 00:22:53.440 "trsvcid": "4420", 00:22:53.440 "trtype": "TCP" 00:22:53.440 } 00:22:53.440 ], 00:22:53.440 "max_cntlid": 65519, 00:22:53.440 "max_namespaces": 32, 00:22:53.440 "min_cntlid": 1, 00:22:53.440 "model_number": "SPDK bdev Controller", 00:22:53.440 "namespaces": [ 00:22:53.440 { 00:22:53.440 "bdev_name": "Null4", 00:22:53.440 "name": "Null4", 00:22:53.440 "nguid": "8F24C064B89C483E90455CA3796C7CA0", 00:22:53.440 "nsid": 1, 00:22:53.440 "uuid": "8f24c064-b89c-483e-9045-5ca3796c7ca0" 00:22:53.440 } 00:22:53.440 ], 00:22:53.440 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:22:53.440 "serial_number": "SPDK00000000000004", 00:22:53.440 "subtype": "NVMe" 00:22:53.440 } 00:22:53.440 ] 00:22:53.440 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.440 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:53.441 13:07:05 
nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:22:53.441 13:07:05 
nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.441 rmmod nvme_tcp 00:22:53.441 rmmod nvme_fabrics 00:22:53.441 rmmod nvme_keyring 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@493 -- # '[' -n 97926 ']' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@494 -- # killprocess 97926 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 97926 ']' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 97926 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97926 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:53.441 killing process with pid 97926 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97926' 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 97926 00:22:53.441 13:07:05 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 97926 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:53.698 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:22:53.698 00:22:53.698 real 0m2.276s 00:22:53.698 user 0m1.907s 00:22:53.698 sys 0m0.689s 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.699 ************************************ 00:22:53.699 END TEST nvmf_target_discovery 00:22:53.699 ************************************ 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:53.699 ************************************ 00:22:53.699 START TEST nvmf_referrals 00:22:53.699 ************************************ 00:22:53.699 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:22:53.956 * Looking for test storage... 00:22:53.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.957 13:07:06 
nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals 
-- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@436 -- # nvmf_veth_init 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@146 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:22:53.957 Cannot find device "nvmf_tgt_br" 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.957 Cannot find device "nvmf_tgt_br2" 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@160 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:22:53.957 Cannot find device "nvmf_tgt_br" 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:22:53.957 Cannot find device "nvmf_tgt_br2" 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@170 -- # ip netns add 
nvmf_tgt_ns_spdk 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.957 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.958 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.958 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.958 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.958 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:22:54.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:54.215 00:22:54.215 --- 10.0.0.2 ping statistics --- 00:22:54.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.215 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:54.215 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:22:54.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:54.215 00:22:54.215 --- 10.0.0.3 ping statistics --- 00:22:54.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.215 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:54.216 00:22:54.216 --- 10.0.0.1 ping statistics --- 00:22:54.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.216 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@437 -- # return 0 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@485 -- # nvmfpid=98149 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@486 -- # waitforlisten 98149 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 98149 ']' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
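nvmfappstart launches the target under 'ip netns exec' and then blocks in waitforlisten until pid 98149 answers on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming scripts/rpc.py and the standard rpc_get_methods call (an illustration only, not the repo's exact implementation):

  # Illustration only; autotest_common.sh's waitforlisten differs in detail.
  wait_for_rpc() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                # RPC socket is up
          fi
          sleep 0.5
      done
      return 1
  }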
00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.216 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.216 [2024-07-15 13:07:06.631982] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:54.216 [2024-07-15 13:07:06.633057] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:22:54.216 [2024-07-15 13:07:06.633627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.474 [2024-07-15 13:07:06.773301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.474 [2024-07-15 13:07:06.842987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.474 [2024-07-15 13:07:06.843049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.474 [2024-07-15 13:07:06.843064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.474 [2024-07-15 13:07:06.843074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.474 [2024-07-15 13:07:06.843084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.474 [2024-07-15 13:07:06.843458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.474 [2024-07-15 13:07:06.843512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.475 [2024-07-15 13:07:06.844182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.475 [2024-07-15 13:07:06.844237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.475 [2024-07-15 13:07:06.907251] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:54.475 [2024-07-15 13:07:06.907491] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:54.475 [2024-07-15 13:07:06.908572] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:54.475 [2024-07-15 13:07:06.909444] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:22:54.475 [2024-07-15 13:07:06.909575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
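
The NOTICE lines above show this is the interrupt-mode variant of the suite (nvmf_tcp_interrupt_mode): nvmfappstart launches nvmf_tgt inside the target namespace with --interrupt-mode and a four-core mask (hence the four reactors and the poll-group threads switched to intr mode), enables all tracepoint groups via -e 0xFFFF, and then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that startup, assuming scripts/rpc.py and the rpc_get_methods RPC as the readiness probe; the polling loop is an illustrative stand-in, not the actual body of waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket accepts a request
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
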
00:22:54.475 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.475 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:22:54.475 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:54.475 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.475 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 [2024-07-15 13:07:06.981151] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:06 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 [2024-07-15 13:07:06.997387] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:22:54.733 13:07:07 
nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:54.733 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 
nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:54.991 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.284 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:22:55.284 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:22:55.284 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:22:55.284 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:55.284 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:22:55.285 13:07:07 
nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:55.285 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:22:55.543 13:07:07 
nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:55.543 13:07:07 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 
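
The referrals checks traced above compare two views of the discovery referral table: the RPC view (nvmf_discovery_get_referrals) and the wire view (nvme discover against the discovery listener on port 8009), asserting that referrals appear in both after nvmf_discovery_add_referral and vanish after nvmf_discovery_remove_referral; a referral added with -n <subnqn> shows up to hosts as an "nvme subsystem" record rather than a generic "discovery subsystem referral". A condensed sketch of one round trip, using scripts/rpc.py in place of the suite's rpc_cmd wrapper (an assumption; the wrapper's body is not shown in this log), with the flags and jq filters taken from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430                                 # plain referral
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1   # subsystem referral
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # the same table as a host sees it, via the discovery service on 10.0.0.2:8009
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
      --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
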
00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.801 rmmod nvme_tcp 00:22:55.801 rmmod nvme_fabrics 00:22:55.801 rmmod nvme_keyring 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@493 -- # '[' -n 98149 ']' 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@494 -- # killprocess 98149 00:22:55.801 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 98149 ']' 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 98149 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98149 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.802 killing process with pid 98149 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98149' 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 98149 00:22:55.802 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 98149 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:56.080 13:07:08 
nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:22:56.080 ************************************ 00:22:56.080 END TEST nvmf_referrals 00:22:56.080 ************************************ 00:22:56.080 00:22:56.080 real 0m2.313s 00:22:56.080 user 0m2.024s 00:22:56.080 sys 0m0.849s 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:56.080 ************************************ 00:22:56.080 START TEST nvmf_connect_disconnect 00:22:56.080 ************************************ 00:22:56.080 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:22:56.339 * Looking for test storage... 
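
Each test ends with nvmftestfini, traced above: unload the host-side NVMe/TCP modules, kill the target (pid 98149 here), and tear down the namespace and addresses; the next test then re-runs nvmftestinit, whose first step is a best-effort teardown of leftovers, which is why the "Cannot find device ..." and "Cannot open network namespace ..." lines further down are expected noise rather than failures. A condensed sketch of that teardown; the commands mirror the trace, while the || true guards and the ip netns delete line are assumptions (the namespace removal happens inside _remove_spdk_ns, whose output is redirected away in the log):

  modprobe -v -r nvme-tcp     || true   # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics || true
  kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null || true
  ip -4 addr flush nvmf_init_if                  || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
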
00:22:56.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n 
'' ']' 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.339 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # nvmf_veth_init 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:56.340 13:07:08 
nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:22:56.340 Cannot find device "nvmf_tgt_br" 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:22:56.340 Cannot find device "nvmf_tgt_br2" 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:22:56.340 Cannot find device "nvmf_tgt_br" 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:22:56.340 Cannot find device "nvmf_tgt_br2" 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:56.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:56.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:56.340 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:22:56.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:22:56.598 00:22:56.598 --- 10.0.0.2 ping statistics --- 00:22:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.598 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:22:56.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:56.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:56.598 00:22:56.598 --- 10.0.0.3 ping statistics --- 00:22:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.598 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:56.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:22:56.598 00:22:56.598 --- 10.0.0.1 ping statistics --- 00:22:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.598 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@437 -- # return 0 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:56.598 13:07:08 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:56.598 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:22:56.598 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:56.598 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:56.598 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:56.598 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # nvmfpid=98441 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # waitforlisten 98441 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 98441 ']' 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.599 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:56.856 [2024-07-15 13:07:09.074369] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:56.856 [2024-07-15 13:07:09.075671] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:22:56.856 [2024-07-15 13:07:09.075784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.856 [2024-07-15 13:07:09.211744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.856 [2024-07-15 13:07:09.271195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.856 [2024-07-15 13:07:09.271253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.856 [2024-07-15 13:07:09.271265] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.856 [2024-07-15 13:07:09.271274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.856 [2024-07-15 13:07:09.271281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.856 [2024-07-15 13:07:09.271352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.856 [2024-07-15 13:07:09.271463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.856 [2024-07-15 13:07:09.271589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.856 [2024-07-15 13:07:09.271594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.114 [2024-07-15 13:07:09.326216] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:57.114 [2024-07-15 13:07:09.326644] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:57.114 [2024-07-15 13:07:09.326685] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:57.114 [2024-07-15 13:07:09.327057] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:22:57.114 [2024-07-15 13:07:09.327796] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
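
With the target up in interrupt mode again, connect_disconnect.sh provisions a minimal storage stack over RPC, as traced next: a TCP transport, a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above), subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. Expressed directly with scripts/rpc.py, assuming the suite's rpc_cmd wrapper simply forwards to it:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                    # returns the bdev name, Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
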
00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:57.114 [2024-07-15 13:07:09.409113] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 
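
The five "disconnected 1 controller(s)" lines just below are the visible output of the test's main loop: num_iterations=5, and each pass connects the host to nqn.2016-06.io.spdk:cnode1 over TCP and disconnects again (the loop body itself is hidden behind set +x). An illustrative reconstruction, not the script's literal body; the connect flags reuse the address, port, hostnqn and hostid values seen elsewhere in this log:

  for ((i = 0; i < 5; i++)); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
          --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a
      sleep 1                                        # let the controller come up before tearing it down
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints "NQN:... disconnected 1 controller(s)"
  done
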
00:22:57.114 [2024-07-15 13:07:09.469441] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:22:57.114 13:07:09 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:22:59.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:01.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:03.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:07.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.864 rmmod nvme_tcp 00:23:07.864 rmmod nvme_fabrics 00:23:07.864 rmmod nvme_keyring 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # '[' -n 98441 ']' 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # killprocess 98441 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 98441 ']' 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 98441 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98441 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.864 killing process with pid 98441 00:23:07.864 13:07:20 
nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98441' 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 98441 00:23:07.864 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 98441 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:23:08.121 00:23:08.121 real 0m12.033s 00:23:08.121 user 0m37.966s 00:23:08.121 sys 0m6.492s 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.121 ************************************ 00:23:08.121 END TEST nvmf_connect_disconnect 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:08.121 ************************************ 00:23:08.121 13:07:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:23:08.122 13:07:20 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:23:08.122 13:07:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.122 13:07:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.122 13:07:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:08.380 ************************************ 00:23:08.380 START TEST nvmf_multitarget 00:23:08.380 ************************************ 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:23:08.380 * Looking for test storage... 
00:23:08.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.380 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- 
nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@436 -- # nvmf_veth_init 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:23:08.381 Cannot find device "nvmf_tgt_br" 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.381 Cannot find device "nvmf_tgt_br2" 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@160 
-- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:23:08.381 Cannot find device "nvmf_tgt_br" 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:23:08.381 Cannot find device "nvmf_tgt_br2" 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:23:08.381 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@192 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:23:08.639 13:07:20 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:08.639 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:08.639 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:08.639 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:08.639 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:23:08.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:08.639 00:23:08.639 --- 10.0.0.2 ping statistics --- 00:23:08.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.640 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:23:08.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:08.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:23:08.640 00:23:08.640 --- 10.0.0.3 ping statistics --- 00:23:08.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.640 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:08.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:08.640 00:23:08.640 --- 10.0.0.1 ping statistics --- 00:23:08.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.640 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@437 -- # return 0 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@485 -- # nvmfpid=98804 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@486 -- # waitforlisten 98804 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 98804 ']' 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.640 13:07:21 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:08.898 [2024-07-15 13:07:21.135335] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:08.898 [2024-07-15 13:07:21.136658] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:23:08.898 [2024-07-15 13:07:21.136731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.898 [2024-07-15 13:07:21.273786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.898 [2024-07-15 13:07:21.333854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.898 [2024-07-15 13:07:21.333904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.898 [2024-07-15 13:07:21.333915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.898 [2024-07-15 13:07:21.333923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.898 [2024-07-15 13:07:21.333931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.898 [2024-07-15 13:07:21.334062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.898 [2024-07-15 13:07:21.334156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.898 [2024-07-15 13:07:21.334668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.898 [2024-07-15 13:07:21.334699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.156 [2024-07-15 13:07:21.391294] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:09.156 [2024-07-15 13:07:21.391410] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:09.156 [2024-07-15 13:07:21.391612] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:09.156 [2024-07-15 13:07:21.391873] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:09.156 [2024-07-15 13:07:21.391946] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
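For reference, the nvmf_veth_init sequence in the trace above builds the following test network: one initiator-side veth left in the root namespace and two target-side veths moved into nvmf_tgt_ns_spdk, all tied together by a bridge. Condensed (every command below appears in the xtrace; the "ip link set ... up" calls and the verification pings are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if       # first target (listener) address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2      # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT            # admit NVMe/TCP traffic on the initiator interface

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF), which is what produced the interrupt-mode and reactor notices directly above.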
00:23:09.721 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:23:09.722 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:09.979 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:23:09.979 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:23:09.979 "nvmf_tgt_1" 00:23:09.979 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:23:10.237 "nvmf_tgt_2" 00:23:10.237 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:10.237 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:23:10.495 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:23:10.495 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:23:10.495 true 00:23:10.495 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:23:10.495 true 00:23:10.754 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:10.754 13:07:22 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.754 13:07:23 
nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.754 rmmod nvme_tcp 00:23:10.754 rmmod nvme_fabrics 00:23:10.754 rmmod nvme_keyring 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@493 -- # '[' -n 98804 ']' 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@494 -- # killprocess 98804 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 98804 ']' 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 98804 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.754 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98804 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:11.014 killing process with pid 98804 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98804' 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 98804 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 98804 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:23:11.014 00:23:11.014 real 0m2.872s 00:23:11.014 user 0m1.839s 00:23:11.014 sys 0m0.616s 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.014 13:07:23 nvmf_tcp_interrupt_mode.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 
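Stripped of the xtrace noise, the multitarget test above reduces to a handful of multitarget_rpc.py calls plus jq length assertions (a readability sketch; the path is as printed in the log, the $rpc variable is introduced here only for brevity, and jq is assumed to be on PATH):

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length              # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # prints "nvmf_tgt_1"
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32    # prints "nvmf_tgt_2"
  $rpc nvmf_get_targets | jq length              # 3
  $rpc nvmf_delete_target -n nvmf_tgt_1          # prints "true"
  $rpc nvmf_delete_target -n nvmf_tgt_2          # prints "true"
  $rpc nvmf_get_targets | jq length              # back to 1

After the final count matches, nvmftestfini unloads nvme-tcp/nvme-fabrics, kills the target (pid 98804) and clears the namespace state, as the remove_spdk_ns and ip -4 addr flush lines above show.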
00:23:11.014 ************************************ 00:23:11.014 END TEST nvmf_multitarget 00:23:11.014 ************************************ 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:11.289 ************************************ 00:23:11.289 START TEST nvmf_rpc 00:23:11.289 ************************************ 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:23:11.289 * Looking for test storage... 00:23:11.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.289 13:07:23 
nvmf_tcp_interrupt_mode.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.289 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@436 -- # nvmf_veth_init 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:23:11.290 Cannot find device "nvmf_tgt_br" 00:23:11.290 13:07:23 
nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.290 Cannot find device "nvmf_tgt_br2" 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@160 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:23:11.290 Cannot find device "nvmf_tgt_br" 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:23:11.290 Cannot find device "nvmf_tgt_br2" 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.290 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:23:11.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:11.567 00:23:11.567 --- 10.0.0.2 ping statistics --- 00:23:11.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.567 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:23:11.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:11.567 00:23:11.567 --- 10.0.0.3 ping statistics --- 00:23:11.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.567 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:11.567 00:23:11.567 --- 10.0.0.1 ping statistics --- 00:23:11.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.567 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@437 -- # return 0 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@485 -- # nvmfpid=99030 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@486 -- # waitforlisten 99030 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 99030 ']' 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.567 13:07:23 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:11.824 [2024-07-15 13:07:24.044529] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:11.824 [2024-07-15 13:07:24.045887] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:23:11.824 [2024-07-15 13:07:24.045953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.825 [2024-07-15 13:07:24.181218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.825 [2024-07-15 13:07:24.243042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.825 [2024-07-15 13:07:24.243089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.825 [2024-07-15 13:07:24.243102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.825 [2024-07-15 13:07:24.243110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.825 [2024-07-15 13:07:24.243117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.825 [2024-07-15 13:07:24.243233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.825 [2024-07-15 13:07:24.243719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.825 [2024-07-15 13:07:24.243792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.825 [2024-07-15 13:07:24.243797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.081 [2024-07-15 13:07:24.306650] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:12.081 [2024-07-15 13:07:24.306887] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:12.081 [2024-07-15 13:07:24.307466] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:23:12.081 [2024-07-15 13:07:24.307653] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:12.081 [2024-07-15 13:07:24.308128] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
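The rpc.sh assertions that follow all work by reducing the nvmf_get_stats JSON printed below; conceptually (rpc_cmd is the suite's rpc.py wrapper, and the jq/awk filters are the ones visible in the xtrace):

  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l        # expect 4 poll groups, one per core in -m 0xF
  rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'      # null until a transport exists
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # afterwards each poll group lists a TCP transport
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 0: nothing has connected yet
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 0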
00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:23:12.647 "poll_groups": [ 00:23:12.647 { 00:23:12.647 "admin_qpairs": 0, 00:23:12.647 "completed_nvme_io": 0, 00:23:12.647 "current_admin_qpairs": 0, 00:23:12.647 "current_io_qpairs": 0, 00:23:12.647 "io_qpairs": 0, 00:23:12.647 "name": "nvmf_tgt_poll_group_000", 00:23:12.647 "pending_bdev_io": 0, 00:23:12.647 "transports": [] 00:23:12.647 }, 00:23:12.647 { 00:23:12.647 "admin_qpairs": 0, 00:23:12.647 "completed_nvme_io": 0, 00:23:12.647 "current_admin_qpairs": 0, 00:23:12.647 "current_io_qpairs": 0, 00:23:12.647 "io_qpairs": 0, 00:23:12.647 "name": "nvmf_tgt_poll_group_001", 00:23:12.647 "pending_bdev_io": 0, 00:23:12.647 "transports": [] 00:23:12.647 }, 00:23:12.647 { 00:23:12.647 "admin_qpairs": 0, 00:23:12.647 "completed_nvme_io": 0, 00:23:12.647 "current_admin_qpairs": 0, 00:23:12.647 "current_io_qpairs": 0, 00:23:12.647 "io_qpairs": 0, 00:23:12.647 "name": "nvmf_tgt_poll_group_002", 00:23:12.647 "pending_bdev_io": 0, 00:23:12.647 "transports": [] 00:23:12.647 }, 00:23:12.647 { 00:23:12.647 "admin_qpairs": 0, 00:23:12.647 "completed_nvme_io": 0, 00:23:12.647 "current_admin_qpairs": 0, 00:23:12.647 "current_io_qpairs": 0, 00:23:12.647 "io_qpairs": 0, 00:23:12.647 "name": "nvmf_tgt_poll_group_003", 00:23:12.647 "pending_bdev_io": 0, 00:23:12.647 "transports": [] 00:23:12.647 } 00:23:12.647 ], 00:23:12.647 "tick_rate": 2200000000 00:23:12.647 }' 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:23:12.647 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.905 13:07:25 
nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:12.905 [2024-07-15 13:07:25.200744] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:23:12.905 "poll_groups": [ 00:23:12.905 { 00:23:12.905 "admin_qpairs": 0, 00:23:12.905 "completed_nvme_io": 0, 00:23:12.905 "current_admin_qpairs": 0, 00:23:12.905 "current_io_qpairs": 0, 00:23:12.905 "io_qpairs": 0, 00:23:12.905 "name": "nvmf_tgt_poll_group_000", 00:23:12.905 "pending_bdev_io": 0, 00:23:12.905 "transports": [ 00:23:12.905 { 00:23:12.905 "trtype": "TCP" 00:23:12.905 } 00:23:12.905 ] 00:23:12.905 }, 00:23:12.905 { 00:23:12.905 "admin_qpairs": 0, 00:23:12.905 "completed_nvme_io": 0, 00:23:12.905 "current_admin_qpairs": 0, 00:23:12.905 "current_io_qpairs": 0, 00:23:12.905 "io_qpairs": 0, 00:23:12.905 "name": "nvmf_tgt_poll_group_001", 00:23:12.905 "pending_bdev_io": 0, 00:23:12.905 "transports": [ 00:23:12.905 { 00:23:12.905 "trtype": "TCP" 00:23:12.905 } 00:23:12.905 ] 00:23:12.905 }, 00:23:12.905 { 00:23:12.905 "admin_qpairs": 0, 00:23:12.905 "completed_nvme_io": 0, 00:23:12.905 "current_admin_qpairs": 0, 00:23:12.905 "current_io_qpairs": 0, 00:23:12.905 "io_qpairs": 0, 00:23:12.905 "name": "nvmf_tgt_poll_group_002", 00:23:12.905 "pending_bdev_io": 0, 00:23:12.905 "transports": [ 00:23:12.905 { 00:23:12.905 "trtype": "TCP" 00:23:12.905 } 00:23:12.905 ] 00:23:12.905 }, 00:23:12.905 { 00:23:12.905 "admin_qpairs": 0, 00:23:12.905 "completed_nvme_io": 0, 00:23:12.905 "current_admin_qpairs": 0, 00:23:12.905 "current_io_qpairs": 0, 00:23:12.905 "io_qpairs": 0, 00:23:12.905 "name": "nvmf_tgt_poll_group_003", 00:23:12.905 "pending_bdev_io": 0, 00:23:12.905 "transports": [ 00:23:12.905 { 00:23:12.905 "trtype": "TCP" 00:23:12.905 } 00:23:12.905 ] 00:23:12.905 } 00:23:12.905 ], 00:23:12.905 "tick_rate": 2200000000 00:23:12.905 }' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 
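The poll-group and qpair checks above are driven by two small jq helpers whose bodies can be read off the xtrace (target/rpc.sh lines 14-20): jcount counts the values a filter matches and jsum totals them. A sketch of those helpers under the assumption that they read the nvmf_get_stats JSON captured into the $stats variable shown in the trace (the exact input plumbing is not visible in the xtrace):

  # jcount: number of values matched by a jq filter (e.g. one per poll group name)
  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  # jsum: arithmetic sum of the values matched by a jq filter (e.g. io_qpairs across poll groups)
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # Checks corresponding to the trace: 4 poll groups, and no admin or I/O qpairs yet
  (( $(jcount '.poll_groups[].name') == 4 ))
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))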
00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.905 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.163 Malloc1 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.163 [2024-07-15 13:07:25.404709] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.2 -s 4420 00:23:13.163 [2024-07-15 13:07:25.424820] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a' 00:23:13.163 Failed to write to /dev/nvme-fabrics: Input/output error 00:23:13.163 could not add new controller: failed to write to nvme-fabrics device 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:23:13.163 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:13.164 13:07:25 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:15.096 13:07:27 
nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:15.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:15.096 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:23:15.354 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:15.354 [2024-07-15 13:07:27.615630] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a' 00:23:15.354 Failed to write to /dev/nvme-fabrics: Input/output error 00:23:15.355 could not add new controller: failed to write to nvme-fabrics device 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:15.355 13:07:27 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:17.252 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:17.252 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:17.252 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.252 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:17.253 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.253 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:17.253 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:17.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:17.530 
13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.530 [2024-07-15 13:07:29.800730] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.530 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:17.531 13:07:29 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:19.428 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:19.428 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:19.428 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:19.691 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:19.691 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.691 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:19.691 13:07:31 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:19.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@82 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.691 [2024-07-15 13:07:32.092618] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.691 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:19.692 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.692 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:19.692 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.692 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:19.961 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:19.961 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:19.961 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.962 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:19.962 13:07:32 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@90 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:21.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:21.862 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 [2024-07-15 13:07:34.300588] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 
nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.863 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:22.120 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:22.120 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:22.120 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:22.120 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:22.120 13:07:34 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:24.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.015 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 [2024-07-15 13:07:36.508606] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:24.273 13:07:36 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 
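Stripped of the xtrace noise, each of the five iterations traced in this stretch follows the same create/connect/verify/tear-down sequence. A condensed sketch of one iteration, using the commands and values exactly as they appear in the trace (rpc_cmd, waitforserial and waitforserial_disconnect are the test suite's own helper functions shown above):

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      # Connect from the host side and wait until the namespace's serial shows up in lsblk
      nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
          --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME
      # Disconnect and wait until the serial disappears again
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done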
00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:26.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:26.191 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 [2024-07-15 13:07:38.712650] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:26.449 13:07:38 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:23:28.352 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:28.352 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:28.352 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:28.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:28.620 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:23:28.621 13:07:40 
nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 [2024-07-15 13:07:40.920667] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
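The second loop traced here (target/rpc.sh@99-107) repeats the subsystem lifecycle five times without connecting a host: create the subsystem, add the TCP listener, attach the Malloc1 namespace, allow any host, then remove namespace 1 and delete. A condensed sketch of one iteration, again using only the commands visible in the trace:

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done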
00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 [2024-07-15 13:07:40.976692] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:40 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- 
# for i in $(seq 1 $loops) 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 [2024-07-15 13:07:41.040665] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.621 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 
nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 [2024-07-15 13:07:41.108783] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 [2024-07-15 13:07:41.160869] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.880 13:07:41 
nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:23:28.880 "poll_groups": [ 00:23:28.880 { 00:23:28.880 "admin_qpairs": 2, 00:23:28.880 "completed_nvme_io": 66, 00:23:28.880 "current_admin_qpairs": 0, 00:23:28.880 "current_io_qpairs": 0, 00:23:28.880 "io_qpairs": 16, 00:23:28.880 "name": "nvmf_tgt_poll_group_000", 00:23:28.880 "pending_bdev_io": 0, 00:23:28.880 "transports": [ 00:23:28.880 { 00:23:28.880 "trtype": "TCP" 00:23:28.880 } 00:23:28.880 ] 00:23:28.880 }, 00:23:28.880 { 00:23:28.880 "admin_qpairs": 3, 00:23:28.880 "completed_nvme_io": 67, 00:23:28.880 "current_admin_qpairs": 0, 00:23:28.880 "current_io_qpairs": 0, 00:23:28.880 "io_qpairs": 17, 00:23:28.880 "name": "nvmf_tgt_poll_group_001", 00:23:28.880 "pending_bdev_io": 0, 00:23:28.880 "transports": [ 00:23:28.880 { 00:23:28.880 "trtype": "TCP" 00:23:28.880 } 00:23:28.880 ] 00:23:28.880 }, 00:23:28.880 { 00:23:28.880 "admin_qpairs": 1, 00:23:28.880 "completed_nvme_io": 120, 00:23:28.880 "current_admin_qpairs": 0, 00:23:28.880 "current_io_qpairs": 0, 00:23:28.880 "io_qpairs": 19, 00:23:28.880 "name": "nvmf_tgt_poll_group_002", 00:23:28.880 "pending_bdev_io": 0, 00:23:28.880 "transports": [ 00:23:28.880 { 
00:23:28.880 "trtype": "TCP" 00:23:28.880 } 00:23:28.880 ] 00:23:28.880 }, 00:23:28.880 { 00:23:28.880 "admin_qpairs": 1, 00:23:28.880 "completed_nvme_io": 167, 00:23:28.880 "current_admin_qpairs": 0, 00:23:28.880 "current_io_qpairs": 0, 00:23:28.880 "io_qpairs": 18, 00:23:28.880 "name": "nvmf_tgt_poll_group_003", 00:23:28.880 "pending_bdev_io": 0, 00:23:28.880 "transports": [ 00:23:28.880 { 00:23:28.880 "trtype": "TCP" 00:23:28.880 } 00:23:28.880 ] 00:23:28.880 } 00:23:28.880 ], 00:23:28.880 "tick_rate": 2200000000 00:23:28.880 }' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:23:28.880 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.139 rmmod nvme_tcp 00:23:29.139 rmmod nvme_fabrics 00:23:29.139 rmmod nvme_keyring 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@493 -- # '[' -n 99030 ']' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@494 -- # killprocess 99030 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 99030 ']' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 99030 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.139 13:07:41 
nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99030 00:23:29.139 killing process with pid 99030 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99030' 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 99030 00:23:29.139 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 99030 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:23:29.398 00:23:29.398 real 0m18.170s 00:23:29.398 user 0m57.962s 00:23:29.398 sys 0m8.044s 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:29.398 ************************************ 00:23:29.398 END TEST nvmf_rpc 00:23:29.398 ************************************ 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:29.398 ************************************ 00:23:29.398 START TEST nvmf_invalid 00:23:29.398 ************************************ 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:23:29.398 * Looking for test storage... 
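The nvmf_rpc trace that just finished repeats one full subsystem lifecycle per loop iteration and then sums the per-poll-group counters reported by nvmf_get_stats. Stripped of the rpc_cmd/jsum wrappers, each pass looks roughly like the sketch below; nqn, serial, address and port are copied from the trace, scripts/rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and the loop count of 5 is only a placeholder (rpc.sh computes $loops elsewhere).

    loops=5                                    # placeholder; the real value comes from rpc.sh
    for i in $(seq 1 $loops); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
    # jsum-style check: add one counter across all poll groups and require a non-zero total
    (( $(scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}') > 0 ))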
00:23:29.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.398 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@14 -- # 
target=foobar 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@436 -- # nvmf_veth_init 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:23:29.399 Cannot find device "nvmf_tgt_br" 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@159 
-- # true 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.399 Cannot find device "nvmf_tgt_br2" 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@160 -- # true 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:23:29.399 Cannot find device "nvmf_tgt_br" 00:23:29.399 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:23:29.660 Cannot find device "nvmf_tgt_br2" 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.660 13:07:41 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- 
nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:23:29.660 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.661 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:23:29.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:29.918 00:23:29.918 --- 10.0.0.2 ping statistics --- 00:23:29.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.918 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:23:29.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:29.918 00:23:29.918 --- 10.0.0.3 ping statistics --- 00:23:29.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.918 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:29.918 00:23:29.918 --- 10.0.0.1 ping statistics --- 00:23:29.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.918 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@437 -- # return 0 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@485 -- # nvmfpid=99522 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@486 -- # waitforlisten 99522 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 99522 ']' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.918 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:23:29.918 [2024-07-15 13:07:42.257074] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:29.918 [2024-07-15 13:07:42.258718] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
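The nvmf_veth_init sequence above builds the whole test network before any NVMe-oF traffic flows: veth pairs, a network namespace for the target, addresses on 10.0.0.0/24, a bridge joining the host-side peers, and iptables rules for port 4420. Condensed to its essential commands (interface names and addresses exactly as traced; the second target interface at 10.0.0.3 is created the same way and omitted here), the topology is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # host -> namespaced target, as checked above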
00:23:29.918 [2024-07-15 13:07:42.258844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.176 [2024-07-15 13:07:42.401821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.176 [2024-07-15 13:07:42.463110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.176 [2024-07-15 13:07:42.463191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.176 [2024-07-15 13:07:42.463212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.176 [2024-07-15 13:07:42.463227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.176 [2024-07-15 13:07:42.463240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.176 [2024-07-15 13:07:42.463528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.176 [2024-07-15 13:07:42.463623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.176 [2024-07-15 13:07:42.464128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.176 [2024-07-15 13:07:42.464139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.176 [2024-07-15 13:07:42.520103] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:30.176 [2024-07-15 13:07:42.520420] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:30.176 [2024-07-15 13:07:42.520582] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:30.176 [2024-07-15 13:07:42.520867] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:23:30.176 [2024-07-15 13:07:42.523234] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
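This is the part of the run that exercises the new code: nvmf_tgt is launched inside the namespace with --interrupt-mode, reactors come up on all four cores, and each nvmf_tgt poll-group thread is switched to interrupt mode instead of busy polling. Reduced to a sketch (binary path, flags and socket copied from the nvmfappstart trace above; polling rpc_get_methods is only a stand-in for the test's waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # block until the target answers on its RPC socket before issuing any nvmf_* RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done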
00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:30.176 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25698 00:23:30.433 [2024-07-15 13:07:42.896759] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:23:30.691 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 13:07:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25698 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:23:30.691 request: 00:23:30.691 { 00:23:30.691 "method": "nvmf_create_subsystem", 00:23:30.691 "params": { 00:23:30.691 "nqn": "nqn.2016-06.io.spdk:cnode25698", 00:23:30.691 "tgt_name": "foobar" 00:23:30.691 } 00:23:30.691 } 00:23:30.691 Got JSON-RPC error response 00:23:30.691 GoRPCClient: error on JSON-RPC call' 00:23:30.691 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 13:07:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25698 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:23:30.691 request: 00:23:30.691 { 00:23:30.691 "method": "nvmf_create_subsystem", 00:23:30.691 "params": { 00:23:30.691 "nqn": "nqn.2016-06.io.spdk:cnode25698", 00:23:30.691 "tgt_name": "foobar" 00:23:30.691 } 00:23:30.691 } 00:23:30.691 Got JSON-RPC error response 00:23:30.691 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:23:30.691 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:23:30.691 13:07:42 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29549 00:23:30.948 [2024-07-15 13:07:43.252800] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29549: invalid serial number 'SPDKISFASTANDAWESOME' 00:23:30.948 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 13:07:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29549 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:23:30.948 request: 00:23:30.948 { 00:23:30.948 "method": "nvmf_create_subsystem", 00:23:30.948 "params": { 00:23:30.948 "nqn": 
"nqn.2016-06.io.spdk:cnode29549", 00:23:30.948 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:23:30.948 } 00:23:30.948 } 00:23:30.948 Got JSON-RPC error response 00:23:30.948 GoRPCClient: error on JSON-RPC call' 00:23:30.948 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 13:07:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29549 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:23:30.948 request: 00:23:30.948 { 00:23:30.948 "method": "nvmf_create_subsystem", 00:23:30.948 "params": { 00:23:30.948 "nqn": "nqn.2016-06.io.spdk:cnode29549", 00:23:30.948 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:23:30.948 } 00:23:30.948 } 00:23:30.948 Got JSON-RPC error response 00:23:30.948 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:23:30.948 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:23:30.948 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11305 00:23:31.206 [2024-07-15 13:07:43.528871] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11305: invalid model number 'SPDK_Controller' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 13:07:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11305], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:23:31.206 request: 00:23:31.206 { 00:23:31.206 "method": "nvmf_create_subsystem", 00:23:31.206 "params": { 00:23:31.206 "nqn": "nqn.2016-06.io.spdk:cnode11305", 00:23:31.206 "model_number": "SPDK_Controller\u001f" 00:23:31.206 } 00:23:31.206 } 00:23:31.206 Got JSON-RPC error response 00:23:31.206 GoRPCClient: error on JSON-RPC call' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 13:07:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11305], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:23:31.206 request: 00:23:31.206 { 00:23:31.206 "method": "nvmf_create_subsystem", 00:23:31.206 "params": { 00:23:31.206 "nqn": "nqn.2016-06.io.spdk:cnode11305", 00:23:31.206 "model_number": "SPDK_Controller\u001f" 00:23:31.206 } 00:23:31.206 } 00:23:31.206 Got JSON-RPC error response 00:23:31.206 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' 
'113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:23:31.206 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J9wj+dZ:\OE&vsJ|.%ic)' 00:23:31.207 13:07:43 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'J9wj+dZ:\OE&vsJ|.%ic)' nqn.2016-06.io.spdk:cnode15153 00:23:31.773 [2024-07-15 13:07:44.032792] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15153: invalid serial number 'J9wj+dZ:\OE&vsJ|.%ic)' 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 13:07:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15153 serial_number:J9wj+dZ:\OE&vsJ|.%ic)], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN J9wj+dZ:\OE&vsJ|.%ic) 00:23:31.773 request: 00:23:31.773 { 00:23:31.773 "method": "nvmf_create_subsystem", 00:23:31.773 "params": { 00:23:31.773 "nqn": "nqn.2016-06.io.spdk:cnode15153", 00:23:31.773 "serial_number": "J9wj+dZ:\\OE&vsJ|.%ic)" 00:23:31.773 } 00:23:31.773 } 00:23:31.773 Got JSON-RPC error response 00:23:31.773 GoRPCClient: error on JSON-RPC call' 
00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 13:07:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15153 serial_number:J9wj+dZ:\OE&vsJ|.%ic)], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN J9wj+dZ:\OE&vsJ|.%ic) 00:23:31.773 request: 00:23:31.773 { 00:23:31.773 "method": "nvmf_create_subsystem", 00:23:31.773 "params": { 00:23:31.773 "nqn": "nqn.2016-06.io.spdk:cnode15153", 00:23:31.773 "serial_number": "J9wj+dZ:\\OE&vsJ|.%ic)" 00:23:31.773 } 00:23:31.773 } 00:23:31.773 Got JSON-RPC error response 00:23:31.773 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:23:31.773 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 43 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 34 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 39 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:23:31.774 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 
00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@31 -- # echo '+qM+yiP<}P"AFg8HF'\''lVAD(?Wmh7!BNcddG+p4BRm' 00:23:31.775 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '+qM+yiP<}P"AFg8HF'\''lVAD(?Wmh7!BNcddG+p4BRm' nqn.2016-06.io.spdk:cnode29294 00:23:32.033 [2024-07-15 13:07:44.492849] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29294: invalid model number '+qM+yiP<}P"AFg8HF'lVAD(?Wmh7!BNcddG+p4BRm' 00:23:32.290 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 13:07:44 error on JSON-RPC 
call, method: nvmf_create_subsystem, params: map[model_number:+qM+yiP<}P"AFg8HF'\''lVAD(?Wmh7!BNcddG+p4BRm nqn:nqn.2016-06.io.spdk:cnode29294], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN +qM+yiP<}P"AFg8HF'\''lVAD(?Wmh7!BNcddG+p4BRm 00:23:32.290 request: 00:23:32.290 { 00:23:32.290 "method": "nvmf_create_subsystem", 00:23:32.290 "params": { 00:23:32.290 "nqn": "nqn.2016-06.io.spdk:cnode29294", 00:23:32.290 "model_number": "+qM+yiP<}P\"AFg8HF'\''lVAD(?Wmh7!BNcddG+p4BRm" 00:23:32.290 } 00:23:32.291 } 00:23:32.291 Got JSON-RPC error response 00:23:32.291 GoRPCClient: error on JSON-RPC call' 00:23:32.291 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 13:07:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:+qM+yiP<}P"AFg8HF'lVAD(?Wmh7!BNcddG+p4BRm nqn:nqn.2016-06.io.spdk:cnode29294], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN +qM+yiP<}P"AFg8HF'lVAD(?Wmh7!BNcddG+p4BRm 00:23:32.291 request: 00:23:32.291 { 00:23:32.291 "method": "nvmf_create_subsystem", 00:23:32.291 "params": { 00:23:32.291 "nqn": "nqn.2016-06.io.spdk:cnode29294", 00:23:32.291 "model_number": "+qM+yiP<}P\"AFg8HF'lVAD(?Wmh7!BNcddG+p4BRm" 00:23:32.291 } 00:23:32.291 } 00:23:32.291 Got JSON-RPC error response 00:23:32.291 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:23:32.291 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:23:32.548 [2024-07-15 13:07:44.773003] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.548 13:07:44 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:23:32.806 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:23:32.806 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:23:32.806 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:23:32.806 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:23:32.806 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:23:33.063 [2024-07-15 13:07:45.449049] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:23:33.063 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 13:07:45 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:33.063 request: 00:23:33.063 { 00:23:33.063 "method": "nvmf_subsystem_remove_listener", 00:23:33.063 "params": { 00:23:33.063 "nqn": "nqn.2016-06.io.spdk:cnode", 00:23:33.063 "listen_address": { 00:23:33.063 "trtype": "tcp", 00:23:33.063 "traddr": "", 00:23:33.063 "trsvcid": "4421" 00:23:33.063 } 00:23:33.063 } 00:23:33.063 } 00:23:33.063 Got JSON-RPC error response 00:23:33.063 GoRPCClient: error on JSON-RPC call' 00:23:33.063 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 13:07:45 error on JSON-RPC call, 
method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:33.063 request: 00:23:33.063 { 00:23:33.063 "method": "nvmf_subsystem_remove_listener", 00:23:33.063 "params": { 00:23:33.063 "nqn": "nqn.2016-06.io.spdk:cnode", 00:23:33.063 "listen_address": { 00:23:33.063 "trtype": "tcp", 00:23:33.063 "traddr": "", 00:23:33.063 "trsvcid": "4421" 00:23:33.063 } 00:23:33.063 } 00:23:33.063 } 00:23:33.063 Got JSON-RPC error response 00:23:33.063 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:23:33.063 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21923 -i 0 00:23:33.322 [2024-07-15 13:07:45.728877] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21923: invalid cntlid range [0-65519] 00:23:33.322 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 13:07:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21923], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:23:33.322 request: 00:23:33.322 { 00:23:33.322 "method": "nvmf_create_subsystem", 00:23:33.322 "params": { 00:23:33.322 "nqn": "nqn.2016-06.io.spdk:cnode21923", 00:23:33.322 "min_cntlid": 0 00:23:33.322 } 00:23:33.322 } 00:23:33.322 Got JSON-RPC error response 00:23:33.322 GoRPCClient: error on JSON-RPC call' 00:23:33.322 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 13:07:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21923], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:23:33.322 request: 00:23:33.322 { 00:23:33.322 "method": "nvmf_create_subsystem", 00:23:33.322 "params": { 00:23:33.322 "nqn": "nqn.2016-06.io.spdk:cnode21923", 00:23:33.322 "min_cntlid": 0 00:23:33.322 } 00:23:33.322 } 00:23:33.322 Got JSON-RPC error response 00:23:33.322 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:23:33.322 13:07:45 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5200 -i 65520 00:23:33.579 [2024-07-15 13:07:45.980858] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5200: invalid cntlid range [65520-65519] 00:23:33.580 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 13:07:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5200], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:23:33.580 request: 00:23:33.580 { 00:23:33.580 "method": "nvmf_create_subsystem", 00:23:33.580 "params": { 00:23:33.580 "nqn": "nqn.2016-06.io.spdk:cnode5200", 00:23:33.580 "min_cntlid": 65520 00:23:33.580 } 00:23:33.580 } 00:23:33.580 Got JSON-RPC error response 00:23:33.580 GoRPCClient: error on JSON-RPC call' 00:23:33.580 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 
13:07:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5200], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:23:33.580 request: 00:23:33.580 { 00:23:33.580 "method": "nvmf_create_subsystem", 00:23:33.580 "params": { 00:23:33.580 "nqn": "nqn.2016-06.io.spdk:cnode5200", 00:23:33.580 "min_cntlid": 65520 00:23:33.580 } 00:23:33.580 } 00:23:33.580 Got JSON-RPC error response 00:23:33.580 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:23:33.580 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3102 -I 0 00:23:33.837 [2024-07-15 13:07:46.228845] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3102: invalid cntlid range [1-0] 00:23:33.837 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3102], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:23:33.837 request: 00:23:33.837 { 00:23:33.837 "method": "nvmf_create_subsystem", 00:23:33.837 "params": { 00:23:33.837 "nqn": "nqn.2016-06.io.spdk:cnode3102", 00:23:33.837 "max_cntlid": 0 00:23:33.837 } 00:23:33.837 } 00:23:33.837 Got JSON-RPC error response 00:23:33.837 GoRPCClient: error on JSON-RPC call' 00:23:33.837 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3102], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:23:33.837 request: 00:23:33.837 { 00:23:33.837 "method": "nvmf_create_subsystem", 00:23:33.837 "params": { 00:23:33.837 "nqn": "nqn.2016-06.io.spdk:cnode3102", 00:23:33.837 "max_cntlid": 0 00:23:33.837 } 00:23:33.837 } 00:23:33.837 Got JSON-RPC error response 00:23:33.837 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:23:33.837 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3899 -I 65520 00:23:34.095 [2024-07-15 13:07:46.540733] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3899: invalid cntlid range [1-65520] 00:23:34.352 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3899], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:23:34.352 request: 00:23:34.352 { 00:23:34.352 "method": "nvmf_create_subsystem", 00:23:34.352 "params": { 00:23:34.352 "nqn": "nqn.2016-06.io.spdk:cnode3899", 00:23:34.352 "max_cntlid": 65520 00:23:34.352 } 00:23:34.352 } 00:23:34.352 Got JSON-RPC error response 00:23:34.352 GoRPCClient: error on JSON-RPC call' 00:23:34.352 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3899], err: error received 
for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:23:34.352 request: 00:23:34.352 { 00:23:34.352 "method": "nvmf_create_subsystem", 00:23:34.352 "params": { 00:23:34.352 "nqn": "nqn.2016-06.io.spdk:cnode3899", 00:23:34.352 "max_cntlid": 65520 00:23:34.352 } 00:23:34.352 } 00:23:34.352 Got JSON-RPC error response 00:23:34.352 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:23:34.353 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12412 -i 6 -I 5 00:23:34.353 [2024-07-15 13:07:46.812948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12412: invalid cntlid range [6-5] 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode12412], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:23:34.611 request: 00:23:34.611 { 00:23:34.611 "method": "nvmf_create_subsystem", 00:23:34.611 "params": { 00:23:34.611 "nqn": "nqn.2016-06.io.spdk:cnode12412", 00:23:34.611 "min_cntlid": 6, 00:23:34.611 "max_cntlid": 5 00:23:34.611 } 00:23:34.611 } 00:23:34.611 Got JSON-RPC error response 00:23:34.611 GoRPCClient: error on JSON-RPC call' 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 13:07:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode12412], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:23:34.611 request: 00:23:34.611 { 00:23:34.611 "method": "nvmf_create_subsystem", 00:23:34.611 "params": { 00:23:34.611 "nqn": "nqn.2016-06.io.spdk:cnode12412", 00:23:34.611 "min_cntlid": 6, 00:23:34.611 "max_cntlid": 5 00:23:34.611 } 00:23:34.611 } 00:23:34.611 Got JSON-RPC error response 00:23:34.611 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:23:34.611 { 00:23:34.611 "name": "foobar", 00:23:34.611 "method": "nvmf_delete_target", 00:23:34.611 "req_id": 1 00:23:34.611 } 00:23:34.611 Got JSON-RPC error response 00:23:34.611 response: 00:23:34.611 { 00:23:34.611 "code": -32602, 00:23:34.611 "message": "The specified target doesn'\''t exist, cannot delete it." 00:23:34.611 }' 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:23:34.611 { 00:23:34.611 "name": "foobar", 00:23:34.611 "method": "nvmf_delete_target", 00:23:34.611 "req_id": 1 00:23:34.611 } 00:23:34.611 Got JSON-RPC error response 00:23:34.611 response: 00:23:34.611 { 00:23:34.611 "code": -32602, 00:23:34.611 "message": "The specified target doesn't exist, cannot delete it." 
00:23:34.611 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:34.611 13:07:46 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.611 rmmod nvme_tcp 00:23:34.611 rmmod nvme_fabrics 00:23:34.611 rmmod nvme_keyring 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@493 -- # '[' -n 99522 ']' 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@494 -- # killprocess 99522 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 99522 ']' 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 99522 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:34.611 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99522 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:34.869 killing process with pid 99522 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99522' 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 99522 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 99522 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:23:34.869 00:23:34.869 real 0m5.590s 00:23:34.869 user 0m4.327s 00:23:34.869 sys 0m1.156s 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:34.869 13:07:47 nvmf_tcp_interrupt_mode.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:23:34.869 ************************************ 00:23:34.869 END TEST nvmf_invalid 00:23:34.869 ************************************ 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:35.128 ************************************ 00:23:35.128 START TEST nvmf_abort 00:23:35.128 ************************************ 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:23:35.128 * Looking for test storage... 00:23:35.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort 
-- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:35.128 13:07:47 
nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:23:35.128 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # nvmf_veth_init 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:23:35.129 Cannot find device "nvmf_tgt_br" 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:23:35.129 Cannot find device "nvmf_tgt_br2" 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:23:35.129 Cannot find device "nvmf_tgt_br" 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:23:35.129 Cannot find device "nvmf_tgt_br2" 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:35.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:35.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:35.129 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:35.388 
13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:23:35.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:35.388 00:23:35.388 --- 10.0.0.2 ping statistics --- 00:23:35.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.388 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:23:35.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:35.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:23:35.388 00:23:35.388 --- 10.0.0.3 ping statistics --- 00:23:35.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.388 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:35.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:23:35.388 00:23:35.388 --- 10.0.0.1 ping statistics --- 00:23:35.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.388 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@437 -- # return 0 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@485 -- # nvmfpid=100016 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@486 -- # waitforlisten 100016 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 100016 ']' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.388 13:07:47 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.646 [2024-07-15 13:07:47.870077] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:35.646 [2024-07-15 13:07:47.871180] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:23:35.646 [2024-07-15 13:07:47.872071] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.646 [2024-07-15 13:07:48.011518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.646 [2024-07-15 13:07:48.097467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.646 [2024-07-15 13:07:48.097779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.646 [2024-07-15 13:07:48.097983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.646 [2024-07-15 13:07:48.098268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.646 [2024-07-15 13:07:48.098425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.646 [2024-07-15 13:07:48.098671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.646 [2024-07-15 13:07:48.098830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.646 [2024-07-15 13:07:48.098838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.905 [2024-07-15 13:07:48.159044] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:35.905 [2024-07-15 13:07:48.159302] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:35.905 [2024-07-15 13:07:48.159740] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:35.905 [2024-07-15 13:07:48.159796] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 [2024-07-15 13:07:48.240473] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 Malloc0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 Delay0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.905 [2024-07-15 13:07:48.312705] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.905 13:07:48 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:36.163 [2024-07-15 13:07:48.472953] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:38.071 Initializing NVMe Controllers 00:23:38.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:38.071 controller IO queue size 128 less than required 00:23:38.071 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:38.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:38.071 Initialization complete. Launching workers. 00:23:38.071 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 30004 00:23:38.071 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30065, failed to submit 66 00:23:38.071 success 30004, unsuccess 61, failed 0 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:38.071 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.328 rmmod nvme_tcp 00:23:38.328 rmmod nvme_fabrics 00:23:38.328 rmmod nvme_keyring 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # 
return 0 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # '[' -n 100016 ']' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # killprocess 100016 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 100016 ']' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 100016 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100016 00:23:38.328 killing process with pid 100016 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100016' 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@967 -- # kill 100016 00:23:38.328 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # wait 100016 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:23:38.588 ************************************ 00:23:38.588 END TEST nvmf_abort 00:23:38.588 ************************************ 00:23:38.588 00:23:38.588 real 0m3.497s 00:23:38.588 user 0m8.350s 00:23:38.588 sys 0m1.571s 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:38.588 ************************************ 00:23:38.588 START TEST nvmf_ns_hotplug_stress 00:23:38.588 
************************************ 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:23:38.588 * Looking for test storage... 00:23:38.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.588 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.589 13:07:50 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # nvmf_veth_init 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:23:38.589 Cannot find device "nvmf_tgt_br" 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.589 Cannot find device "nvmf_tgt_br2" 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # true 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:23:38.589 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:23:38.861 Cannot find device "nvmf_tgt_br" 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:23:38.861 Cannot find device "nvmf_tgt_br2" 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:38.861 13:07:51 
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:23:38.861 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:23:38.862 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:23:39.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:39.119 00:23:39.119 --- 10.0.0.2 ping statistics --- 00:23:39.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.119 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:23:39.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:39.119 00:23:39.119 --- 10.0.0.3 ping statistics --- 00:23:39.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.119 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:39.119 00:23:39.119 --- 10.0.0.1 ping statistics --- 00:23:39.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.119 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@437 -- # return 0 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # nvmfpid=100233 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # waitforlisten 100233 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 100233 ']' 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.119 13:07:51 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:39.119 [2024-07-15 13:07:51.452585] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:39.119 [2024-07-15 13:07:51.453644] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:23:39.119 [2024-07-15 13:07:51.453711] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.377 [2024-07-15 13:07:51.588148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:39.377 [2024-07-15 13:07:51.647453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.377 [2024-07-15 13:07:51.647509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.377 [2024-07-15 13:07:51.647520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.377 [2024-07-15 13:07:51.647528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.377 [2024-07-15 13:07:51.647536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.377 [2024-07-15 13:07:51.647663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.377 [2024-07-15 13:07:51.648530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.377 [2024-07-15 13:07:51.648576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.377 [2024-07-15 13:07:51.696699] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:39.377 [2024-07-15 13:07:51.696740] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:39.377 [2024-07-15 13:07:51.696929] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:39.377 [2024-07-15 13:07:51.697052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:23:40.323 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:40.580 [2024-07-15 13:07:52.829488] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.580 13:07:52 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:40.838 13:07:53 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.095 [2024-07-15 13:07:53.441707] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.095 13:07:53 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:41.352 13:07:53 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:41.611 Malloc0 00:23:41.611 13:07:53 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:41.869 Delay0 00:23:41.869 13:07:54 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:42.127 13:07:54 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:42.385 NULL1 00:23:42.385 13:07:54 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:42.643 13:07:55 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=100371 00:23:42.643 13:07:55 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:42.643 13:07:55 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:42.643 13:07:55 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:44.015 Read completed with error (sct=0, sc=11) 00:23:44.015 13:07:56 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:44.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:44.273 13:07:56 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:44.273 13:07:56 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:44.625 true 00:23:44.625 13:07:56 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:44.625 13:07:56 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:45.190 13:07:57 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:45.448 13:07:57 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:45.448 13:07:57 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:46.013 true 00:23:46.013 13:07:58 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:46.013 13:07:58 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:46.270 13:07:58 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:46.527 13:07:58 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:23:46.527 13:07:58 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:47.092 true 00:23:47.092 13:07:59 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:47.092 13:07:59 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:23:48.464 13:08:00 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:48.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:48.722 13:08:00 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:48.722 13:08:00 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:48.980 true 00:23:48.980 13:08:01 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:48.980 13:08:01 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.543 13:08:01 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:49.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:49.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:49.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:49.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:50.059 13:08:02 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:50.059 13:08:02 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:50.317 true 00:23:50.317 13:08:02 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:50.317 13:08:02 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:50.881 13:08:03 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:51.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:51.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:51.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:51.396 13:08:03 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:51.396 13:08:03 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:51.653 true 00:23:51.653 13:08:04 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 100371 00:23:51.653 13:08:04 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:52.590 13:08:04 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:52.847 13:08:05 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:52.847 13:08:05 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:53.107 true 00:23:53.107 13:08:05 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:53.107 13:08:05 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:54.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.479 13:08:06 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:54.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:54.736 13:08:07 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:54.736 13:08:07 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:23:55.300 true 00:23:55.300 13:08:07 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:55.300 13:08:07 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:55.865 13:08:08 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:55.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:55.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:55.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:55.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:56.123 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:23:56.380 13:08:08 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:23:56.380 13:08:08 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:23:56.639 true 00:23:56.639 13:08:09 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:56.639 13:08:09 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:57.204 13:08:09 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:57.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:57.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:57.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:57.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:57.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:57.720 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:23:57.720 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:23:57.978 true 00:23:57.978 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:57.978 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:58.236 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:58.493 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:23:58.493 13:08:10 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:23:59.059 true 00:23:59.059 13:08:11 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:59.059 13:08:11 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:59.317 13:08:11 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:59.576 13:08:11 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:23:59.576 13:08:11 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:23:59.881 true 00:23:59.881 13:08:12 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:23:59.881 13:08:12 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:00.448 13:08:12 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:00.706 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:24:00.706 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:24:00.975 true 00:24:00.975 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:00.975 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:01.236 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:01.494 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:24:01.494 13:08:13 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:24:01.752 true 00:24:01.752 13:08:14 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:01.752 13:08:14 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:02.686 13:08:14 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:02.944 13:08:15 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:24:02.944 13:08:15 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:24:03.203 true 00:24:03.203 13:08:15 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:03.203 13:08:15 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:03.461 13:08:15 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:04.026 13:08:16 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:24:04.026 13:08:16 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:24:04.026 true 00:24:04.026 13:08:16 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:04.026 13:08:16 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:24:04.283 13:08:16 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:04.541 13:08:17 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:24:04.542 13:08:17 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:24:05.108 true 00:24:05.108 13:08:17 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:05.108 13:08:17 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:05.674 13:08:17 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:05.932 13:08:18 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:24:05.932 13:08:18 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:24:06.190 true 00:24:06.190 13:08:18 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:06.190 13:08:18 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:07.564 13:08:19 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:07.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:07.823 13:08:20 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:24:07.823 13:08:20 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:24:08.081 true 00:24:08.081 13:08:20 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:08.081 13:08:20 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.014 13:08:21 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:09.014 13:08:21 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:24:09.014 13:08:21 
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:24:09.272 true 00:24:09.272 13:08:21 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:09.272 13:08:21 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.530 13:08:21 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:09.788 13:08:22 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:24:09.788 13:08:22 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:24:10.045 true 00:24:10.045 13:08:22 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:10.045 13:08:22 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:10.609 13:08:22 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:10.866 13:08:23 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:24:10.866 13:08:23 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:24:11.124 true 00:24:11.124 13:08:23 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:11.124 13:08:23 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:12.055 13:08:24 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:12.312 13:08:24 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:24:12.312 13:08:24 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:24:12.570 true 00:24:12.570 13:08:24 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:12.570 13:08:24 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:13.941 Initializing NVMe Controllers 00:24:13.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.941 Controller IO queue size 128, less than required. 00:24:13.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.941 Controller IO queue size 128, less than required. 
00:24:13.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:13.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:13.942 Initialization complete. Launching workers.
00:24:13.942 ========================================================
00:24:13.942                                                                            Latency(us)
00:24:13.942 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:13.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1783.03       0.87   38866.15    3253.00 1304680.49
00:24:13.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8560.37       4.18   14953.04    3992.85  869338.14
00:24:13.942 ========================================================
00:24:13.942 Total                                                                    :   10343.40       5.05   19075.27    3253.00 1304680.49
00:24:13.942
00:24:13.942 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:14.199 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:24:14.199 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:24:14.456 true 00:24:14.456 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 100371 00:24:14.456 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (100371) - No such process 00:24:14.456 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 100371 00:24:14.456 13:08:26 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:14.713 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:15.277 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:24:15.277 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:24:15.277 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:24:15.277 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:15.277 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:24:15.535 null0 00:24:15.535 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:15.535 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:15.535 13:08:27 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:24:15.792 null1 00:24:15.792 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:15.792 13:08:28
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:15.792 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:24:16.049 null2 00:24:16.049 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:16.049 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:16.049 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:24:16.307 null3 00:24:16.307 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:16.307 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:16.307 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:24:16.565 null4 00:24:16.565 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:16.565 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:16.565 13:08:28 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:24:16.823 null5 00:24:17.080 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:17.080 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:17.080 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:24:17.339 null6 00:24:17.339 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:17.339 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:17.339 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:24:17.597 null7 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
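The eight bdev_null_create calls traced above (ns_hotplug_stress.sh lines 58-60) amount to a setup loop along the lines of the sketch below; the rpc_py shorthand is an assumption for readability, while nthreads=8, pids=(), the 100 MiB size and the 4096-byte block size come straight from the trace.

    # Create one 100 MiB null bdev (4096-byte blocks) per worker: null0 .. null7.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
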
00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.597 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 101279 101280 101282 101284 101286 101288 101289 101292 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:17.598 13:08:29 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:17.862 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:18.427 13:08:30 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:18.685 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:18.686 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:18.943 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.201 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.459 13:08:31 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:19.717 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:19.976 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:19.976 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:19.976 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:19.976 13:08:32 
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:20.235 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.236 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.236 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.236 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.236 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:20.236 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
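The interleaved add/remove traffic above comes from eight add_remove workers racing against the same subsystem. Reconstructed from the traced script lines (@14-@18 for the helper, @62-@66 for the launch and wait), the shape of the stress loop is roughly the following sketch; the rpc_py and subsystem shorthands are assumptions, everything else mirrors the trace.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsystem=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    add_remove() {
        # Attach and detach namespace $nsid (backed by $bdev) ten times (the @16 loop).
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$subsystem" "$nsid"
        done
    }

    # One background worker per null bdev: add_remove 1 null0 ... add_remove 8 null7.
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    # Reap the workers (the 'wait 101279 101280 ...' seen earlier in the trace).
    wait "${pids[@]}"
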
00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:20.494 13:08:32 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:20.752 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:20.752 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:20.752 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:20.752 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:20.752 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:21.010 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:21.010 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.011 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.011 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.268 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:21.527 13:08:33 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:21.785 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.043 13:08:34 
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.043 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:22.044 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.302 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:22.560 13:08:34 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:22.560 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:22.560 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:22.560 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:22.561 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:22.819 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:22.819 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:23.078 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:23.337 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.596 13:08:35 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.855 
13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:23.855 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.129 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:24.387 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.645 13:08:36 
nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:24.645 13:08:36 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:24.645 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:24.645 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:24.903 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.161 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:25.419 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:25.678 13:08:37 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.678 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.936 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:25.937 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:24:26.195 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:26.195 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:24:26.195 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.195 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.195 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:24:26.453 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:24:26.719 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.719 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.719 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:26.719 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.719 13:08:38 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.719 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:24:26.719 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.719 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:26.977 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.235 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.235 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.235 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.236 rmmod nvme_tcp 00:24:27.236 rmmod nvme_fabrics 00:24:27.236 rmmod nvme_keyring 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # '[' -n 100233 ']' 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # killprocess 100233 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 100233 ']' 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 100233 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100233 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.236 killing process with pid 100233 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100233' 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 100233 00:24:27.236 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 100233 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.494 
13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:24:27.494 00:24:27.494 real 0m48.986s 00:24:27.494 user 3m43.633s 00:24:27.494 sys 0m27.850s 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 ************************************ 00:24:27.494 END TEST nvmf_ns_hotplug_stress 00:24:27.494 ************************************ 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 ************************************ 00:24:27.494 START TEST nvmf_connect_stress 00:24:27.494 ************************************ 00:24:27.494 13:08:39 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:24:27.753 * Looking for test storage... 
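For reference, the interleaved rpc.py calls in the hotplug trace above reduce to two loops racing each other against subsystem nqn.2016-06.io.spdk:cnode1: one attaching the null0..null7 bdevs as namespaces 1..8, the other detaching namespaces by id. The sketch below is an illustrative reconstruction of that pattern, not a copy of test/nvmf/target/ns_hotplug_stress.sh; the 10-iteration bound, the rpc.py path, the NQN and the nullN bdev naming are taken from the trace, while the random id selection and backgrounding are assumptions.

# Illustrative sketch reconstructed from the rpc.py calls traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_loop() {
    for ((i = 0; i < 10; i++)); do
        nsid=$((RANDOM % 8 + 1))
        # attach null bdev nullN as namespace id N+1 (null3 -> nsid 4, as in the trace)
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))" || true
    done
}

remove_loop() {
    for ((i = 0; i < 10; i++)); do
        # detach a namespace id; misses are expected while the two loops race
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done
}

add_loop & remove_loop &
wait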
00:24:27.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- 
nvmf/common.sh@452 -- # prepare_net_devs 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.753 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@436 -- # nvmf_veth_init 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:24:27.754 Cannot find device "nvmf_tgt_br" 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:24:27.754 Cannot find device "nvmf_tgt_br2" 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@160 -- # true 00:24:27.754 13:08:40 
nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:24:27.754 Cannot find device "nvmf_tgt_br" 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:24:27.754 Cannot find device "nvmf_tgt_br2" 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:27.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:27.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:27.754 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.012 13:08:40 
nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:24:28.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:24:28.012 00:24:28.012 --- 10.0.0.2 ping statistics --- 00:24:28.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.012 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:24:28.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:28.012 00:24:28.012 --- 10.0.0.3 ping statistics --- 00:24:28.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.012 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:28.012 00:24:28.012 --- 10.0.0.1 ping statistics --- 00:24:28.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.012 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@437 -- # return 0 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.012 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@485 -- # nvmfpid=102641 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@486 -- # waitforlisten 102641 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 102641 ']' 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.013 13:08:40 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:28.013 [2024-07-15 13:08:40.473741] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:28.013 [2024-07-15 13:08:40.475085] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
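Condensed, the nvmf_veth_init sequence traced above builds a three-legged veth topology: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator keeps nvmf_init_if on 10.0.0.1, and the bridge-side peers are enslaved to nvmf_br so the pings (and later the port 4420 traffic) can cross. The commands below are taken from the trace, with only the per-interface repetition folded into a loop; run as root on an otherwise clean host.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target leg
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target leg
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> initiator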
00:24:28.013 [2024-07-15 13:08:40.475858] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.272 [2024-07-15 13:08:40.615701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.272 [2024-07-15 13:08:40.688020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.272 [2024-07-15 13:08:40.688085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.272 [2024-07-15 13:08:40.688099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.272 [2024-07-15 13:08:40.688109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.272 [2024-07-15 13:08:40.688118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.272 [2024-07-15 13:08:40.688211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.272 [2024-07-15 13:08:40.689083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.272 [2024-07-15 13:08:40.689110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.532 [2024-07-15 13:08:40.741437] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:28.532 [2024-07-15 13:08:40.741883] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:28.532 [2024-07-15 13:08:40.741964] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:28.532 [2024-07-15 13:08:40.742961] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
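With the test network in place, the target is started inside nvmf_tgt_ns_spdk with --interrupt-mode, which is what the reactor.c and thread.c notices above confirm: three reactors on cores 1-3 (-m 0xE), and the app_thread plus the three nvmf_tgt_poll_group threads all set to intr mode. Stripped of the harness, the launch reduces to roughly the following; the readiness loop is a simplified stand-in for the waitforlisten helper and assumes the default /var/tmp/spdk.sock RPC socket named in the trace.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# poll the RPC socket until the target answers; bail out if it died during startup
until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done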
00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.105 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.105 [2024-07-15 13:08:41.566045] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.370 [2024-07-15 13:08:41.606535] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.370 NULL1 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=102693 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:24:29.370 13:08:41 
nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.370 13:08:41 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.633 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.633 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:29.633 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:29.633 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.633 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:29.897 [2024-07-15 13:08:42.295009] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:29.897 [2024-07-15 13:08:42.297883] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:29.897 [2024-07-15 13:08:42.298032] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:29.897 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.897 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:29.897 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:29.897 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.897 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:30.162 [2024-07-15 13:08:42.611921] 
nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:30.162 [2024-07-15 13:08:42.614329] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:30.162 [2024-07-15 13:08:42.614401] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:30.422 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.422 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:30.422 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:30.422 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.422 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:30.679 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.679 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:30.679 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:30.679 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.679 13:08:42 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:30.939 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.939 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:30.939 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:30.939 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.939 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:31.195 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.195 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:31.195 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:31.195 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.195 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:31.759 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.759 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:31.759 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:31.759 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.759 13:08:43 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:32.016 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.016 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:32.016 13:08:44 
nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:32.016 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.016 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:32.273 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.273 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:32.273 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:32.273 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.273 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:32.529 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.529 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:32.529 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:32.529 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.529 13:08:44 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:32.787 [2024-07-15 13:08:45.112830] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:32.787 [2024-07-15 13:08:45.115262] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:32.787 [2024-07-15 13:08:45.115364] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:32.787 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.787 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:32.787 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:32.787 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.787 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:33.352 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.352 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:33.352 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:33.352 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.352 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:33.610 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.610 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:33.610 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:33.610 13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.610 
13:08:45 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:33.875 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.875 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:33.875 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:33.875 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.875 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:34.133 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.133 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:34.133 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:34.133 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.133 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:34.392 [2024-07-15 13:08:46.686350] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:34.392 [2024-07-15 13:08:46.688793] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:34.392 [2024-07-15 13:08:46.688876] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:34.392 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.392 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:34.392 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:34.392 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.392 13:08:46 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:34.959 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.959 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:34.959 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:34.959 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.959 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:34.959 [2024-07-15 13:08:47.275871] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:34.959 [2024-07-15 13:08:47.278283] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:34.959 [2024-07-15 13:08:47.278339] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:35.218 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.218 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 
00:24:35.218 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:35.218 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.218 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:35.477 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.477 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:35.477 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:35.477 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.477 13:08:47 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:35.735 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.735 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:35.735 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:35.735 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.735 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:35.993 [2024-07-15 13:08:48.281844] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:35.993 [2024-07-15 13:08:48.284177] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:35.993 [2024-07-15 13:08:48.284239] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:35.993 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.993 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:35.993 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:35.993 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.993 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:36.562 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.562 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:36.562 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:36.562 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.562 13:08:48 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:36.850 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.850 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:36.850 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:36.850 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:36.850 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:37.108 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.109 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:37.109 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:37.109 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.109 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:37.367 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.367 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:37.367 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:37.367 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.367 13:08:49 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:37.625 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.625 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:37.625 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:37.625 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.625 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:38.191 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.191 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:38.191 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:38.191 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.191 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:38.191 [2024-07-15 13:08:50.467045] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:38.191 [2024-07-15 13:08:50.469357] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:38.191 [2024-07-15 13:08:50.469424] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:38.450 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.450 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:38.450 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:38.450 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.450 13:08:50 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:38.709 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:24:38.709 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:38.709 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:38.709 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.709 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:38.709 [2024-07-15 13:08:51.145264] nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x1400a60 00:24:38.709 [2024-07-15 13:08:51.148913] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:38.709 [2024-07-15 13:08:51.148984] nvme_ctrlr.c:1251:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:24:38.968 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.968 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:38.968 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:38.968 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.968 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:39.539 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.539 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:39.539 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:24:39.539 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.539 13:08:51 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:39.539 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 102693 00:24:39.798 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (102693) - No such process 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 102693 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.798 rmmod nvme_tcp 00:24:39.798 rmmod nvme_fabrics 00:24:39.798 rmmod nvme_keyring 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@493 -- # '[' -n 102641 ']' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@494 -- # killprocess 102641 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 102641 ']' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 102641 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102641 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.798 killing process with pid 102641 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102641' 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 102641 00:24:39.798 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 102641 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:24:40.056 00:24:40.056 real 0m12.536s 00:24:40.056 user 0m20.307s 00:24:40.056 sys 0m7.088s 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.056 13:08:52 nvmf_tcp_interrupt_mode.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:24:40.056 ************************************ 00:24:40.056 END TEST 
nvmf_connect_stress 00:24:40.056 ************************************ 00:24:40.315 13:08:52 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:24:40.315 13:08:52 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:24:40.315 13:08:52 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.315 13:08:52 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.315 13:08:52 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:40.315 ************************************ 00:24:40.315 START TEST nvmf_fused_ordering 00:24:40.315 ************************************ 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:24:40.316 * Looking for test storage... 00:24:40.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@452 -- # prepare_net_devs 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@414 -- # local -g is_hw=no 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@416 -- # remove_spdk_ns 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@436 -- # nvmf_veth_init 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:40.316 13:08:52 
nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:24:40.316 Cannot find device "nvmf_tgt_br" 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:24:40.316 Cannot find device "nvmf_tgt_br2" 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@160 -- # true 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:24:40.316 Cannot find device "nvmf_tgt_br" 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:24:40.316 Cannot find device "nvmf_tgt_br2" 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:24:40.316 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:40.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:40.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@182 -- 
# ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:40.576 13:08:52 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:40.576 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:40.576 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:24:40.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:24:40.576 00:24:40.576 --- 10.0.0.2 ping statistics --- 00:24:40.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.576 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:40.576 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:24:40.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:40.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:24:40.576 00:24:40.576 --- 10.0.0.3 ping statistics --- 00:24:40.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.576 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:40.576 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:40.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:24:40.576 00:24:40.576 --- 10.0.0.1 ping statistics --- 00:24:40.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.576 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@437 -- # return 0 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@485 -- # nvmfpid=103009 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@486 -- # waitforlisten 103009 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 103009 ']' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.835 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:40.835 [2024-07-15 13:08:53.196663] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:40.835 [2024-07-15 13:08:53.198368] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:24:40.835 [2024-07-15 13:08:53.198458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.095 [2024-07-15 13:08:53.346448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.095 [2024-07-15 13:08:53.435113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.095 [2024-07-15 13:08:53.435196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.095 [2024-07-15 13:08:53.435218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.095 [2024-07-15 13:08:53.435233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.095 [2024-07-15 13:08:53.435245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.095 [2024-07-15 13:08:53.435296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.095 [2024-07-15 13:08:53.500100] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:41.095 [2024-07-15 13:08:53.500505] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:41.095 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.095 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:24:41.095 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:24:41.095 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:41.095 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 [2024-07-15 13:08:53.647302] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 [2024-07-15 13:08:53.672270] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 NULL1 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.352 13:08:53 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:41.352 [2024-07-15 13:08:53.742417] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:24:41.352 [2024-07-15 13:08:53.742497] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103044 ] 00:24:41.918 Attached to nqn.2016-06.io.spdk:cnode1 00:24:41.918 Namespace ID: 1 size: 1GB 00:24:41.918 fused_ordering(0) 00:24:41.918 fused_ordering(1) 00:24:41.918 fused_ordering(2) 00:24:41.918 fused_ordering(3) 00:24:41.918 fused_ordering(4) 00:24:41.918 fused_ordering(5) 00:24:41.918 fused_ordering(6) 00:24:41.918 fused_ordering(7) 00:24:41.918 fused_ordering(8) 00:24:41.918 fused_ordering(9) 00:24:41.918 fused_ordering(10) 00:24:41.918 fused_ordering(11) 00:24:41.918 fused_ordering(12) 00:24:41.918 fused_ordering(13) 00:24:41.918 fused_ordering(14) 00:24:41.918 fused_ordering(15) 00:24:41.918 fused_ordering(16) 00:24:41.918 fused_ordering(17) 00:24:41.918 fused_ordering(18) 00:24:41.918 fused_ordering(19) 00:24:41.918 fused_ordering(20) 00:24:41.918 fused_ordering(21) 00:24:41.918 fused_ordering(22) 00:24:41.918 fused_ordering(23) 00:24:41.918 fused_ordering(24) 00:24:41.918 fused_ordering(25) 00:24:41.918 fused_ordering(26) 00:24:41.918 fused_ordering(27) 00:24:41.918 fused_ordering(28) 00:24:41.918 fused_ordering(29) 00:24:41.918 fused_ordering(30) 00:24:41.918 fused_ordering(31) 00:24:41.918 fused_ordering(32) 00:24:41.919 fused_ordering(33) 00:24:41.919 fused_ordering(34) 00:24:41.919 fused_ordering(35) 00:24:41.919 fused_ordering(36) 00:24:41.919 fused_ordering(37) 00:24:41.919 fused_ordering(38) 00:24:41.919 fused_ordering(39) 00:24:41.919 fused_ordering(40) 00:24:41.919 fused_ordering(41) 00:24:41.919 fused_ordering(42) 00:24:41.919 fused_ordering(43) 00:24:41.919 fused_ordering(44) 00:24:41.919 fused_ordering(45) 00:24:41.919 fused_ordering(46) 00:24:41.919 fused_ordering(47) 00:24:41.919 fused_ordering(48) 00:24:41.919 fused_ordering(49) 00:24:41.919 fused_ordering(50) 00:24:41.919 fused_ordering(51) 00:24:41.919 fused_ordering(52) 00:24:41.919 fused_ordering(53) 00:24:41.919 fused_ordering(54) 00:24:41.919 fused_ordering(55) 00:24:41.919 fused_ordering(56) 00:24:41.919 fused_ordering(57) 00:24:41.919 fused_ordering(58) 00:24:41.919 fused_ordering(59) 00:24:41.919 fused_ordering(60) 00:24:41.919 fused_ordering(61) 00:24:41.919 fused_ordering(62) 00:24:41.919 fused_ordering(63) 00:24:41.919 fused_ordering(64) 00:24:41.919 fused_ordering(65) 00:24:41.919 fused_ordering(66) 00:24:41.919 fused_ordering(67) 00:24:41.919 fused_ordering(68) 00:24:41.919 fused_ordering(69) 00:24:41.919 fused_ordering(70) 00:24:41.919 fused_ordering(71) 00:24:41.919 fused_ordering(72) 00:24:41.919 fused_ordering(73) 00:24:41.919 fused_ordering(74) 00:24:41.919 fused_ordering(75) 00:24:41.919 fused_ordering(76) 00:24:41.919 fused_ordering(77) 00:24:41.919 fused_ordering(78) 00:24:41.919 fused_ordering(79) 00:24:41.919 fused_ordering(80) 00:24:41.919 fused_ordering(81) 00:24:41.919 fused_ordering(82) 00:24:41.919 fused_ordering(83) 00:24:41.919 fused_ordering(84) 00:24:41.919 fused_ordering(85) 00:24:41.919 fused_ordering(86) 00:24:41.919 fused_ordering(87) 00:24:41.919 fused_ordering(88) 00:24:41.919 fused_ordering(89) 00:24:41.919 fused_ordering(90) 00:24:41.919 fused_ordering(91) 00:24:41.919 fused_ordering(92) 00:24:41.919 fused_ordering(93) 00:24:41.919 fused_ordering(94) 00:24:41.919 fused_ordering(95) 00:24:41.919 fused_ordering(96) 00:24:41.919 fused_ordering(97) 00:24:41.919 
fused_ordering(98) 00:24:41.919 fused_ordering(99) 00:24:41.919 fused_ordering(100) 00:24:41.919 fused_ordering(101) 00:24:41.919 fused_ordering(102) 00:24:41.919 fused_ordering(103) 00:24:41.919 fused_ordering(104) 00:24:41.919 fused_ordering(105) 00:24:41.919 fused_ordering(106) 00:24:41.919 fused_ordering(107) 00:24:41.919 fused_ordering(108) 00:24:41.919 fused_ordering(109) 00:24:41.919 fused_ordering(110) 00:24:41.919 fused_ordering(111) 00:24:41.919 fused_ordering(112) 00:24:41.919 fused_ordering(113) 00:24:41.919 fused_ordering(114) 00:24:41.919 fused_ordering(115) 00:24:41.919 fused_ordering(116) 00:24:41.919 fused_ordering(117) 00:24:41.919 fused_ordering(118) 00:24:41.919 fused_ordering(119) 00:24:41.919 fused_ordering(120) 00:24:41.919 fused_ordering(121) 00:24:41.919 fused_ordering(122) 00:24:41.919 fused_ordering(123) 00:24:41.919 fused_ordering(124) 00:24:41.919 fused_ordering(125) 00:24:41.919 fused_ordering(126) 00:24:41.919 fused_ordering(127) 00:24:41.919 fused_ordering(128) 00:24:41.919 fused_ordering(129) 00:24:41.919 fused_ordering(130) 00:24:41.919 fused_ordering(131) 00:24:41.919 fused_ordering(132) 00:24:41.919 fused_ordering(133) 00:24:41.919 fused_ordering(134) 00:24:41.919 fused_ordering(135) 00:24:41.919 fused_ordering(136) 00:24:41.919 fused_ordering(137) 00:24:41.919 fused_ordering(138) 00:24:41.919 fused_ordering(139) 00:24:41.919 fused_ordering(140) 00:24:41.919 fused_ordering(141) 00:24:41.919 fused_ordering(142) 00:24:41.919 fused_ordering(143) 00:24:41.919 fused_ordering(144) 00:24:41.919 fused_ordering(145) 00:24:41.919 fused_ordering(146) 00:24:41.919 fused_ordering(147) 00:24:41.919 fused_ordering(148) 00:24:41.919 fused_ordering(149) 00:24:41.919 fused_ordering(150) 00:24:41.919 fused_ordering(151) 00:24:41.919 fused_ordering(152) 00:24:41.919 fused_ordering(153) 00:24:41.919 fused_ordering(154) 00:24:41.919 fused_ordering(155) 00:24:41.919 fused_ordering(156) 00:24:41.919 fused_ordering(157) 00:24:41.919 fused_ordering(158) 00:24:41.919 fused_ordering(159) 00:24:41.919 fused_ordering(160) 00:24:41.919 fused_ordering(161) 00:24:41.919 fused_ordering(162) 00:24:41.919 fused_ordering(163) 00:24:41.919 fused_ordering(164) 00:24:41.919 fused_ordering(165) 00:24:41.919 fused_ordering(166) 00:24:41.919 fused_ordering(167) 00:24:41.919 fused_ordering(168) 00:24:41.919 fused_ordering(169) 00:24:41.919 fused_ordering(170) 00:24:41.919 fused_ordering(171) 00:24:41.919 fused_ordering(172) 00:24:41.919 fused_ordering(173) 00:24:41.919 fused_ordering(174) 00:24:41.919 fused_ordering(175) 00:24:41.919 fused_ordering(176) 00:24:41.919 fused_ordering(177) 00:24:41.919 fused_ordering(178) 00:24:41.919 fused_ordering(179) 00:24:41.919 fused_ordering(180) 00:24:41.919 fused_ordering(181) 00:24:41.919 fused_ordering(182) 00:24:41.919 fused_ordering(183) 00:24:41.919 fused_ordering(184) 00:24:41.919 fused_ordering(185) 00:24:41.919 fused_ordering(186) 00:24:41.919 fused_ordering(187) 00:24:41.919 fused_ordering(188) 00:24:41.919 fused_ordering(189) 00:24:41.919 fused_ordering(190) 00:24:41.919 fused_ordering(191) 00:24:41.919 fused_ordering(192) 00:24:41.919 fused_ordering(193) 00:24:41.919 fused_ordering(194) 00:24:41.919 fused_ordering(195) 00:24:41.919 fused_ordering(196) 00:24:41.919 fused_ordering(197) 00:24:41.919 fused_ordering(198) 00:24:41.919 fused_ordering(199) 00:24:41.919 fused_ordering(200) 00:24:41.919 fused_ordering(201) 00:24:41.919 fused_ordering(202) 00:24:41.919 fused_ordering(203) 00:24:41.919 fused_ordering(204) 00:24:41.919 fused_ordering(205) 
00:24:42.497 fused_ordering(206) 00:24:42.497 fused_ordering(207) 00:24:42.497 fused_ordering(208) 00:24:42.497 fused_ordering(209) 00:24:42.497 fused_ordering(210) 00:24:42.497 fused_ordering(211) 00:24:42.497 fused_ordering(212) 00:24:42.497 fused_ordering(213) 00:24:42.497 fused_ordering(214) 00:24:42.497 fused_ordering(215) 00:24:42.497 fused_ordering(216) 00:24:42.497 fused_ordering(217) 00:24:42.497 fused_ordering(218) 00:24:42.497 fused_ordering(219) 00:24:42.497 fused_ordering(220) 00:24:42.497 fused_ordering(221) 00:24:42.497 fused_ordering(222) 00:24:42.497 fused_ordering(223) 00:24:42.497 fused_ordering(224) 00:24:42.497 fused_ordering(225) 00:24:42.497 fused_ordering(226) 00:24:42.497 fused_ordering(227) 00:24:42.497 fused_ordering(228) 00:24:42.497 fused_ordering(229) 00:24:42.497 fused_ordering(230) 00:24:42.497 fused_ordering(231) 00:24:42.497 fused_ordering(232) 00:24:42.497 fused_ordering(233) 00:24:42.497 fused_ordering(234) 00:24:42.497 fused_ordering(235) 00:24:42.497 fused_ordering(236) 00:24:42.497 fused_ordering(237) 00:24:42.497 fused_ordering(238) 00:24:42.497 fused_ordering(239) 00:24:42.497 fused_ordering(240) 00:24:42.497 fused_ordering(241) 00:24:42.497 fused_ordering(242) 00:24:42.497 fused_ordering(243) 00:24:42.497 fused_ordering(244) 00:24:42.497 fused_ordering(245) 00:24:42.497 fused_ordering(246) 00:24:42.497 fused_ordering(247) 00:24:42.497 fused_ordering(248) 00:24:42.497 fused_ordering(249) 00:24:42.497 fused_ordering(250) 00:24:42.497 fused_ordering(251) 00:24:42.497 fused_ordering(252) 00:24:42.497 fused_ordering(253) 00:24:42.497 fused_ordering(254) 00:24:42.497 fused_ordering(255) 00:24:42.497 fused_ordering(256) 00:24:42.497 fused_ordering(257) 00:24:42.497 fused_ordering(258) 00:24:42.497 fused_ordering(259) 00:24:42.497 fused_ordering(260) 00:24:42.497 fused_ordering(261) 00:24:42.497 fused_ordering(262) 00:24:42.497 fused_ordering(263) 00:24:42.497 fused_ordering(264) 00:24:42.497 fused_ordering(265) 00:24:42.497 fused_ordering(266) 00:24:42.497 fused_ordering(267) 00:24:42.497 fused_ordering(268) 00:24:42.497 fused_ordering(269) 00:24:42.497 fused_ordering(270) 00:24:42.497 fused_ordering(271) 00:24:42.497 fused_ordering(272) 00:24:42.497 fused_ordering(273) 00:24:42.497 fused_ordering(274) 00:24:42.497 fused_ordering(275) 00:24:42.497 fused_ordering(276) 00:24:42.497 fused_ordering(277) 00:24:42.497 fused_ordering(278) 00:24:42.497 fused_ordering(279) 00:24:42.497 fused_ordering(280) 00:24:42.497 fused_ordering(281) 00:24:42.497 fused_ordering(282) 00:24:42.497 fused_ordering(283) 00:24:42.497 fused_ordering(284) 00:24:42.497 fused_ordering(285) 00:24:42.497 fused_ordering(286) 00:24:42.497 fused_ordering(287) 00:24:42.497 fused_ordering(288) 00:24:42.497 fused_ordering(289) 00:24:42.497 fused_ordering(290) 00:24:42.497 fused_ordering(291) 00:24:42.497 fused_ordering(292) 00:24:42.497 fused_ordering(293) 00:24:42.497 fused_ordering(294) 00:24:42.497 fused_ordering(295) 00:24:42.497 fused_ordering(296) 00:24:42.497 fused_ordering(297) 00:24:42.497 fused_ordering(298) 00:24:42.497 fused_ordering(299) 00:24:42.497 fused_ordering(300) 00:24:42.497 fused_ordering(301) 00:24:42.497 fused_ordering(302) 00:24:42.497 fused_ordering(303) 00:24:42.497 fused_ordering(304) 00:24:42.497 fused_ordering(305) 00:24:42.498 fused_ordering(306) 00:24:42.498 fused_ordering(307) 00:24:42.498 fused_ordering(308) 00:24:42.498 fused_ordering(309) 00:24:42.498 fused_ordering(310) 00:24:42.498 fused_ordering(311) 00:24:42.498 fused_ordering(312) 00:24:42.498 
fused_ordering(313) 00:24:42.498 fused_ordering(314) 00:24:42.498 fused_ordering(315) 00:24:42.498 fused_ordering(316) 00:24:42.498 fused_ordering(317) 00:24:42.498 fused_ordering(318) 00:24:42.498 fused_ordering(319) 00:24:42.498 fused_ordering(320) 00:24:42.498 fused_ordering(321) 00:24:42.498 fused_ordering(322) 00:24:42.498 fused_ordering(323) 00:24:42.498 fused_ordering(324) 00:24:42.498 fused_ordering(325) 00:24:42.498 fused_ordering(326) 00:24:42.498 fused_ordering(327) 00:24:42.498 fused_ordering(328) 00:24:42.498 fused_ordering(329) 00:24:42.498 fused_ordering(330) 00:24:42.498 fused_ordering(331) 00:24:42.498 fused_ordering(332) 00:24:42.498 fused_ordering(333) 00:24:42.498 fused_ordering(334) 00:24:42.498 fused_ordering(335) 00:24:42.498 fused_ordering(336) 00:24:42.498 fused_ordering(337) 00:24:42.498 fused_ordering(338) 00:24:42.498 fused_ordering(339) 00:24:42.498 fused_ordering(340) 00:24:42.498 fused_ordering(341) 00:24:42.498 fused_ordering(342) 00:24:42.498 fused_ordering(343) 00:24:42.498 fused_ordering(344) 00:24:42.498 fused_ordering(345) 00:24:42.498 fused_ordering(346) 00:24:42.498 fused_ordering(347) 00:24:42.498 fused_ordering(348) 00:24:42.498 fused_ordering(349) 00:24:42.498 fused_ordering(350) 00:24:42.498 fused_ordering(351) 00:24:42.498 fused_ordering(352) 00:24:42.498 fused_ordering(353) 00:24:42.498 fused_ordering(354) 00:24:42.498 fused_ordering(355) 00:24:42.498 fused_ordering(356) 00:24:42.498 fused_ordering(357) 00:24:42.498 fused_ordering(358) 00:24:42.498 fused_ordering(359) 00:24:42.498 fused_ordering(360) 00:24:42.498 fused_ordering(361) 00:24:42.498 fused_ordering(362) 00:24:42.498 fused_ordering(363) 00:24:42.498 fused_ordering(364) 00:24:42.498 fused_ordering(365) 00:24:42.498 fused_ordering(366) 00:24:42.498 fused_ordering(367) 00:24:42.498 fused_ordering(368) 00:24:42.498 fused_ordering(369) 00:24:42.498 fused_ordering(370) 00:24:42.498 fused_ordering(371) 00:24:42.498 fused_ordering(372) 00:24:42.498 fused_ordering(373) 00:24:42.498 fused_ordering(374) 00:24:42.498 fused_ordering(375) 00:24:42.498 fused_ordering(376) 00:24:42.498 fused_ordering(377) 00:24:42.498 fused_ordering(378) 00:24:42.498 fused_ordering(379) 00:24:42.498 fused_ordering(380) 00:24:42.498 fused_ordering(381) 00:24:42.498 fused_ordering(382) 00:24:42.498 fused_ordering(383) 00:24:42.498 fused_ordering(384) 00:24:42.498 fused_ordering(385) 00:24:42.498 fused_ordering(386) 00:24:42.498 fused_ordering(387) 00:24:42.498 fused_ordering(388) 00:24:42.498 fused_ordering(389) 00:24:42.498 fused_ordering(390) 00:24:42.498 fused_ordering(391) 00:24:42.498 fused_ordering(392) 00:24:42.498 fused_ordering(393) 00:24:42.498 fused_ordering(394) 00:24:42.498 fused_ordering(395) 00:24:42.498 fused_ordering(396) 00:24:42.498 fused_ordering(397) 00:24:42.498 fused_ordering(398) 00:24:42.498 fused_ordering(399) 00:24:42.498 fused_ordering(400) 00:24:42.498 fused_ordering(401) 00:24:42.498 fused_ordering(402) 00:24:42.498 fused_ordering(403) 00:24:42.498 fused_ordering(404) 00:24:42.498 fused_ordering(405) 00:24:42.498 fused_ordering(406) 00:24:42.498 fused_ordering(407) 00:24:42.498 fused_ordering(408) 00:24:42.498 fused_ordering(409) 00:24:42.498 fused_ordering(410) 00:24:43.131 fused_ordering(411) 00:24:43.131 fused_ordering(412) 00:24:43.131 fused_ordering(413) 00:24:43.131 fused_ordering(414) 00:24:43.131 fused_ordering(415) 00:24:43.131 fused_ordering(416) 00:24:43.131 fused_ordering(417) 00:24:43.131 fused_ordering(418) 00:24:43.131 fused_ordering(419) 00:24:43.131 fused_ordering(420) 
00:24:43.131 fused_ordering(421) ... fused_ordering(561) 00:24:43.132 fused_ordering(562) ... fused_ordering(615) 00:24:43.723 fused_ordering(616) ... fused_ordering(805) 00:24:43.724 fused_ordering(806) ... fused_ordering(820) 00:24:44.661 fused_ordering(821) 00:24:44.662 fused_ordering(822) ... fused_ordering(957) 00:24:44.662 
fused_ordering(958) 00:24:44.662 fused_ordering(959) 00:24:44.662 fused_ordering(960) 00:24:44.662 fused_ordering(961) 00:24:44.662 fused_ordering(962) 00:24:44.662 fused_ordering(963) 00:24:44.662 fused_ordering(964) 00:24:44.662 fused_ordering(965) 00:24:44.662 fused_ordering(966) 00:24:44.662 fused_ordering(967) 00:24:44.662 fused_ordering(968) 00:24:44.662 fused_ordering(969) 00:24:44.662 fused_ordering(970) 00:24:44.662 fused_ordering(971) 00:24:44.662 fused_ordering(972) 00:24:44.662 fused_ordering(973) 00:24:44.662 fused_ordering(974) 00:24:44.662 fused_ordering(975) 00:24:44.662 fused_ordering(976) 00:24:44.662 fused_ordering(977) 00:24:44.662 fused_ordering(978) 00:24:44.662 fused_ordering(979) 00:24:44.662 fused_ordering(980) 00:24:44.662 fused_ordering(981) 00:24:44.662 fused_ordering(982) 00:24:44.662 fused_ordering(983) 00:24:44.662 fused_ordering(984) 00:24:44.662 fused_ordering(985) 00:24:44.662 fused_ordering(986) 00:24:44.662 fused_ordering(987) 00:24:44.662 fused_ordering(988) 00:24:44.662 fused_ordering(989) 00:24:44.662 fused_ordering(990) 00:24:44.662 fused_ordering(991) 00:24:44.662 fused_ordering(992) 00:24:44.662 fused_ordering(993) 00:24:44.662 fused_ordering(994) 00:24:44.662 fused_ordering(995) 00:24:44.662 fused_ordering(996) 00:24:44.662 fused_ordering(997) 00:24:44.662 fused_ordering(998) 00:24:44.662 fused_ordering(999) 00:24:44.662 fused_ordering(1000) 00:24:44.662 fused_ordering(1001) 00:24:44.662 fused_ordering(1002) 00:24:44.662 fused_ordering(1003) 00:24:44.662 fused_ordering(1004) 00:24:44.662 fused_ordering(1005) 00:24:44.662 fused_ordering(1006) 00:24:44.662 fused_ordering(1007) 00:24:44.662 fused_ordering(1008) 00:24:44.662 fused_ordering(1009) 00:24:44.662 fused_ordering(1010) 00:24:44.662 fused_ordering(1011) 00:24:44.662 fused_ordering(1012) 00:24:44.662 fused_ordering(1013) 00:24:44.662 fused_ordering(1014) 00:24:44.662 fused_ordering(1015) 00:24:44.662 fused_ordering(1016) 00:24:44.662 fused_ordering(1017) 00:24:44.662 fused_ordering(1018) 00:24:44.662 fused_ordering(1019) 00:24:44.662 fused_ordering(1020) 00:24:44.662 fused_ordering(1021) 00:24:44.662 fused_ordering(1022) 00:24:44.662 fused_ordering(1023) 00:24:44.662 13:08:56 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:24:44.662 13:08:56 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:24:44.662 13:08:56 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@492 -- # nvmfcleanup 00:24:44.662 13:08:56 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:24:44.662 rmmod nvme_tcp 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.662 rmmod nvme_fabrics 00:24:44.662 rmmod nvme_keyring 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:24:44.662 13:08:57 
nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@493 -- # '[' -n 103009 ']' 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@494 -- # killprocess 103009 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 103009 ']' 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 103009 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:24:44.662 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103009 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.921 killing process with pid 103009 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103009' 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 103009 00:24:44.921 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 103009 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@282 -- # remove_spdk_ns 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:24:45.179 00:24:45.179 real 0m4.884s 00:24:45.179 user 0m4.955s 00:24:45.179 sys 0m2.431s 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:24:45.179 ************************************ 00:24:45.179 END TEST nvmf_fused_ordering 00:24:45.179 ************************************ 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.179 13:08:57 
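The nvmftestfini teardown traced above reduces to a short sequence of host-side steps. A minimal standalone sketch, reconstructed from this trace: the pid 103009 and the namespace/interface names are the values from this run, the ip netns delete line is an assumption about what _remove_spdk_ns expands to, and the real helpers live in nvmf/common.sh rather than in a script like this.
sync
modprobe -v -r nvme-tcp            # unloads nvme_tcp; the rmmod nvme_fabrics/nvme_keyring messages above come from the same step
modprobe -v -r nvme-fabrics
kill 103009                        # nvmfpid of the nvmf_tgt started for the fused_ordering test
while kill -0 103009 2>/dev/null; do sleep 0.1; done   # the suite uses 'wait' since the target is a child of the test shell
ip netns delete nvmf_tgt_ns_spdk   # assumed expansion of _remove_spdk_ns: drop the target namespace
ip -4 addr flush nvmf_init_if      # clear the initiator-side veth address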
nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:45.179 ************************************ 00:24:45.179 START TEST nvmf_delete_subsystem 00:24:45.179 ************************************ 00:24:45.179 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:24:45.179 * Looking for test storage... 00:24:45.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.438 
13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # nvmf_veth_init 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.438 
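The NVMF_* variables above describe the virtual topology that nvmf_veth_init builds in the commands traced next: one initiator veth on the host (10.0.0.1), two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and a bridge tying the host-side peers together. A condensed sketch of the same setup, assuming root and using the names and addresses from this run; the real helper also tears down any leftovers first, which is where the "Cannot find device" messages below come from.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                       # bridge the host-side peers
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1 # connectivity check, as in the pings below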
13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:24:45.438 Cannot find device "nvmf_tgt_br" 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.438 Cannot find device "nvmf_tgt_br2" 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:24:45.438 Cannot find device "nvmf_tgt_br" 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:24:45.438 Cannot find device "nvmf_tgt_br2" 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.438 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:24:45.697 13:08:57 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:24:45.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:24:45.697 00:24:45.697 --- 10.0.0.2 ping statistics --- 00:24:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.697 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:24:45.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:24:45.697 00:24:45.697 --- 10.0.0.3 ping statistics --- 00:24:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.697 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:24:45.697 00:24:45.697 --- 10.0.0.1 ping statistics --- 00:24:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.697 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@437 -- # return 0 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # nvmfpid=103246 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # waitforlisten 103246 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 103246 ']' 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.697 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:45.956 [2024-07-15 13:08:58.255277] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:45.956 [2024-07-15 13:08:58.256650] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
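nvmfappstart prepends NVMF_TARGET_NS_CMD to NVMF_APP, so the target binary runs inside the namespace with interrupt mode enabled and is pinned to cores 0-1 (-m 0x3), matching the two reactors reported below. A roughly equivalent shell sketch, with paths and arguments taken from this run; the polling loop is only a stand-in for waitforlisten, which in the suite watches the RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Wait until the target answers on its default RPC socket (/var/tmp/spdk.sock).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt up as pid $nvmfpid inside nvmf_tgt_ns_spdk"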
00:24:45.956 [2024-07-15 13:08:58.256745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.956 [2024-07-15 13:08:58.412837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:46.215 [2024-07-15 13:08:58.488337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.215 [2024-07-15 13:08:58.488403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.215 [2024-07-15 13:08:58.488415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.215 [2024-07-15 13:08:58.488424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.215 [2024-07-15 13:08:58.488433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.215 [2024-07-15 13:08:58.488555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.215 [2024-07-15 13:08:58.488570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.215 [2024-07-15 13:08:58.542666] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:46.215 [2024-07-15 13:08:58.543039] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:46.215 [2024-07-15 13:08:58.543109] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 [2024-07-15 13:08:58.621348] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 [2024-07-15 13:08:58.643842] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 NULL1 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 Delay0 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=103296 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:24:46.215 13:08:58 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:46.475 [2024-07-15 13:08:58.841815] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
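Taken together, the RPCs traced above build the delete-under-load scenario: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, and a namespace backed by a delay bdev that adds roughly a second of latency to every I/O, so requests are still in flight when the subsystem is torn down two seconds into the perf run. A condensed sketch using scripts/rpc.py directly (rpc_cmd in the suite ultimately forwards to it; paths are the ones from this run):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000  # 1,000,000 us added delay on reads and writes
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive I/O from the host side (cores 2-3, queue depth 128, 70/30 randrw, 512-byte I/O, 5 s).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued mid-run; produces the aborted completions traced below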
00:24:48.373 13:09:00 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.373 13:09:00 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.373 13:09:00 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 starting I/O failed: -6 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.631 [2024-07-15 13:09:00.885661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8390000c00 is same with the state(5) to be set 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Read completed with error (sct=0, sc=8) 00:24:48.631 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read 
completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, 
sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 
starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 Write completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 Read completed with error (sct=0, sc=8) 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:48.632 starting I/O failed: -6 00:24:49.566 [2024-07-15 13:09:01.862580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e510 is same with the state(5) to be set 00:24:49.566 Read completed with error (sct=0, sc=8) 00:24:49.566 Read completed with error (sct=0, sc=8) 00:24:49.566 Read completed with error (sct=0, sc=8) 00:24:49.566 Read completed with error (sct=0, sc=8) 00:24:49.566 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 
00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 [2024-07-15 13:09:01.886182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d804c0 is same with the state(5) to be set 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 13:09:01 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 
[2024-07-15 13:09:01.900253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e6f0 is same with the state(5) to be set 00:24:49.567 13:09:01 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:24:49.567 13:09:01 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 103296 00:24:49.567 13:09:01 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:24:49.567 Initializing NVMe Controllers 00:24:49.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.567 Controller IO queue size 128, less than required. 00:24:49.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:49.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:49.567 Initialization complete. Launching workers. 00:24:49.567 ======================================================== 00:24:49.567 Latency(us) 00:24:49.567 Device Information : IOPS MiB/s Average min max 00:24:49.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 178.82 0.09 939993.03 728.85 1997308.76 00:24:49.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.47 0.08 906597.50 614.70 1033258.63 00:24:49.567 ======================================================== 00:24:49.567 Total : 345.29 0.17 923892.47 614.70 1997308.76 00:24:49.567 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 [2024-07-15 13:09:01.904890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f839000cfe0 is same with the state(5) to be set 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with 
error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Read completed with error (sct=0, sc=8) 00:24:49.567 Write completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Write completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Write completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 Read completed with error (sct=0, sc=8) 00:24:49.568 [2024-07-15 13:09:01.905572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f839000d600 is same with the state(5) to be set 00:24:49.568 [2024-07-15 13:09:01.906240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5e510 (9): Bad file descriptor 00:24:49.568 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 103296 00:24:50.190 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (103296) - No such process 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 103296 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 103296 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 103296 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.190 13:09:02 
nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:50.190 [2024-07-15 13:09:02.421862] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=103338 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:50.190 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:50.190 [2024-07-15 13:09:02.614027] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
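The block above is the second pass of the delete-while-I/O exercise: delete_subsystem.sh re-creates nqn.2016-06.io.spdk:cnode1 over RPC, points spdk_nvme_perf at the listener, and then polls the perf process while the subsystem is deleted underneath it. A condensed sketch of that sequence, using only commands that appear in this trace (rpc_cmd is the autotest helper used above to issue the RPCs; the Delay0 bdev is created earlier in the script and is not shown here):

    # Re-create the subsystem and expose it on the 10.0.0.2:4420 TCP listener, as traced above.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start the I/O load in the background, exactly as invoked in the log (perf_pid=103338 in this run).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Bounded wait on the perf job; a sketch of the delay/kill -0/sleep loop traced at delete_subsystem.sh@56-60.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done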
00:24:50.759 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:50.759 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:50.759 13:09:02 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:51.017 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:51.017 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:51.017 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:51.581 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:51.581 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:51.581 13:09:03 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:52.147 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:52.147 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:52.147 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:52.727 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:52.727 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:52.727 13:09:04 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:53.293 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:53.293 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:53.293 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:53.293 Initializing NVMe Controllers 00:24:53.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.293 Controller IO queue size 128, less than required. 00:24:53.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:53.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:53.293 Initialization complete. Launching workers. 
00:24:53.293 ======================================================== 00:24:53.293 Latency(us) 00:24:53.293 Device Information : IOPS MiB/s Average min max 00:24:53.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006668.41 1000342.50 1023983.79 00:24:53.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004504.21 1000212.57 1020433.22 00:24:53.293 ======================================================== 00:24:53.293 Total : 256.00 0.12 1005586.31 1000212.57 1023983.79 00:24:53.293 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 103338 00:24:53.557 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (103338) - No such process 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 103338 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # nvmfcleanup 00:24:53.557 13:09:05 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.820 rmmod nvme_tcp 00:24:53.820 rmmod nvme_fabrics 00:24:53.820 rmmod nvme_keyring 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # '[' -n 103246 ']' 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # killprocess 103246 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 103246 ']' 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 103246 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103246 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.820 killing process with pid 103246 00:24:53.820 
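The second perf report above can be sanity-checked by hand: both worker cores completed 128 IOPS of 512-byte random I/O, so the Total row is just the two per-core rows combined. A quick check (not part of the test), using only the numbers printed in the table:

    # Mean latency across two equally loaded cores matches the Total row.
    awk 'BEGIN { printf "%.2f\n", (1006668.41 + 1004504.21) / 2 }'   # 1005586.31 us

    # Per-core bandwidth: 128 IOPS x 512-byte blocks, reported in MiB/s.
    awk 'BEGIN { printf "%.2f\n", 128 * 512 / 1048576 }'             # 0.06 MiB/s, as in the per-core rows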
13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103246' 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 103246 00:24:53.820 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 103246 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:24:54.078 ************************************ 00:24:54.078 END TEST nvmf_delete_subsystem 00:24:54.078 ************************************ 00:24:54.078 00:24:54.078 real 0m8.882s 00:24:54.078 user 0m22.672s 00:24:54.078 sys 0m3.351s 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:54.078 ************************************ 00:24:54.078 START TEST nvmf_ns_masking 00:24:54.078 ************************************ 00:24:54.078 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:24:54.078 * Looking for test storage... 
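Before ns_masking.sh starts in earnest, note the teardown traced just above: nvmftestfini unloads the host-side NVMe/TCP modules, kills the target, and clears the test interface. Condensed to the commands visible in the trace (103246 was the nvmf_tgt pid for the previous test):

    sync                            # nvmfcleanup flushes outstanding writes first
    modprobe -v -r nvme-tcp         # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines appear to be its verbose output
    modprobe -v -r nvme-fabrics
    kill 103246                     # done via the killprocess helper in the trace
    ip -4 addr flush nvmf_init_if   # drop the initiator-side test address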
00:24:54.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 
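ns_masking.sh keys the whole run off freshly generated identifiers, traced here and continuing just below: nvme gen-hostnqn for the host NQN, uuidgen for the two namespace GUIDs and for the host identifier used at connect time. A minimal sketch of that setup (the exact extraction of NVME_HOSTID from the NQN is an illustration; the trace only shows the resulting value):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:2cb7827d-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # illustrative: peel the UUID off the generated NQN
    ns1uuid=$(uuidgen)
    ns2uuid=$(uuidgen)
    HOSTID=$(uuidgen)                     # later passed to 'nvme connect ... -I $HOSTID'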
00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=95b7b6a9-5890-40a9-b675-f34d73ba1f4d 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=796f59d9-012b-4357-882a-85760255632d 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fc5da725-6c22-4f02-8bbc-61ea931c1431 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@452 -- # prepare_net_devs 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@414 -- # local -g is_hw=no 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@416 -- # remove_spdk_ns 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@436 -- # nvmf_veth_init 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:54.341 13:09:06 
nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:54.341 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:24:54.342 Cannot find device "nvmf_tgt_br" 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:24:54.342 Cannot find device "nvmf_tgt_br2" 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@160 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:24:54.342 Cannot find device "nvmf_tgt_br" 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:24:54.342 Cannot find device "nvmf_tgt_br2" 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:54.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:54.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:54.342 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:54.600 13:09:06 
nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:24:54.600 13:09:06 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:24:54.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:24:54.600 00:24:54.600 --- 10.0.0.2 ping statistics --- 00:24:54.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.600 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:24:54.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:54.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:54.600 00:24:54.600 --- 10.0.0.3 ping statistics --- 00:24:54.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.600 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:54.600 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:54.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:54.600 00:24:54.600 --- 10.0.0.1 ping statistics --- 00:24:54.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.600 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@437 -- # return 0 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:24:54.601 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@485 -- # nvmfpid=103567 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@486 -- # waitforlisten 103567 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 103567 ']' 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.861 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:24:54.861 [2024-07-15 13:09:07.149591] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:54.861 [2024-07-15 13:09:07.151346] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
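The nvmf_veth_init block above builds the whole test network from scratch before the target is launched inside the namespace: a nvmf_tgt_ns_spdk network namespace, veth pairs for the initiator and target sides, a nvmf_br bridge joining the host-side ends, the 10.0.0.1/10.0.0.2/10.0.0.3 addresses, an iptables accept rule for port 4420, and ping checks in every direction. Condensed to the commands actually traced (only the first target pair is shown; nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3 follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                               # host-side bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator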
00:24:54.861 [2024-07-15 13:09:07.152013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.861 [2024-07-15 13:09:07.292892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.118 [2024-07-15 13:09:07.374831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.118 [2024-07-15 13:09:07.374906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.118 [2024-07-15 13:09:07.374929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.118 [2024-07-15 13:09:07.374944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.118 [2024-07-15 13:09:07.374957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.118 [2024-07-15 13:09:07.375000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.118 [2024-07-15 13:09:07.437351] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:55.118 [2024-07-15 13:09:07.437809] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.118 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:55.683 [2024-07-15 13:09:07.911842] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.683 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:24:55.683 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:24:55.683 13:09:07 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:55.940 Malloc1 00:24:55.940 13:09:08 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:24:56.503 Malloc2 00:24:56.503 13:09:08 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:56.760 13:09:09 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 
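With the network up and nvmf_tgt running inside the namespace (launched with -i 0 -e 0xFFFF --interrupt-mode, as traced above), the target side of the masking test is provisioned entirely through scripts/rpc.py. Condensed to the RPCs visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # full path as used in the log

    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options passed above
    $rpc bdev_malloc_create 64 512 -b Malloc1         # sizes from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: any host may connect
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1

The listener on 10.0.0.2:4420 is added just below, and the masking itself follows further down in the trace, where Malloc1 is re-added with --no-auto-visible and its visibility is toggled per host NQN through nvmf_ns_add_host and nvmf_ns_remove_host.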
00:24:57.417 13:09:09 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.674 [2024-07-15 13:09:09.907970] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.674 13:09:09 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:24:57.674 13:09:09 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc5da725-6c22-4f02-8bbc-61ea931c1431 -a 10.0.0.2 -s 4420 -i 4 00:24:57.674 13:09:10 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:24:57.674 13:09:10 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.674 13:09:10 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.674 13:09:10 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:57.674 13:09:10 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:24:59.577 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:24:59.835 [ 0]:0x1 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8d86f2b6c3f47f197143b375c83c774 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8d86f2b6c3f47f197143b375c83c774 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:24:59.835 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:00.093 [ 0]:0x1 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8d86f2b6c3f47f197143b375c83c774 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8d86f2b6c3f47f197143b375c83c774 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:00.093 [ 1]:0x2 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:00.093 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:00.351 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:00.351 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:00.351 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:25:00.351 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:00.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:00.351 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:00.608 13:09:12 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:25:00.874 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:25:00.874 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc5da725-6c22-4f02-8bbc-61ea931c1431 -a 10.0.0.2 -s 4420 -i 4 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 
]] 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:25:01.137 13:09:13 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:03.033 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:03.291 [ 0]:0x2 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:03.291 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:25:03.547 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:25:03.547 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:03.547 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:03.547 [ 0]:0x1 00:25:03.547 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:03.547 13:09:15 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:03.547 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8d86f2b6c3f47f197143b375c83c774 00:25:03.547 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8d86f2b6c3f47f197143b375c83c774 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:03.547 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:25:03.547 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:03.547 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:03.805 [ 1]:0x2 00:25:03.805 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:03.805 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:03.805 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:03.805 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:03.805 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
common/autotest_common.sh@648 -- # local es=0 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:04.062 [ 0]:0x2 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:25:04.062 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:04.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:04.318 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:25:04.597 13:09:16 
nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:25:04.597 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fc5da725-6c22-4f02-8bbc-61ea931c1431 -a 10.0.0.2 -s 4420 -i 4 00:25:04.597 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:25:04.597 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.597 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.597 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:25:04.598 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:25:04.598 13:09:16 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:25:06.496 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:25:06.754 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:06.754 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:06.754 [ 0]:0x1 00:25:06.754 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:06.754 13:09:18 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8d86f2b6c3f47f197143b375c83c774 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8d86f2b6c3f47f197143b375c83c774 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:25:06.754 [ 1]:0x2 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:06.754 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:07.011 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:07.269 [ 0]:0x2 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:07.269 13:09:19 
nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:07.269 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:25:07.527 [2024-07-15 13:09:19.803642] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:25:07.527 2024/07/15 13:09:19 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:25:07.527 request: 00:25:07.527 { 00:25:07.527 "method": "nvmf_ns_remove_host", 00:25:07.527 "params": { 00:25:07.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.527 "nsid": 2, 00:25:07.527 "host": "nqn.2016-06.io.spdk:host1" 00:25:07.527 } 00:25:07.527 } 00:25:07.527 Got JSON-RPC error response 00:25:07.527 GoRPCClient: error on JSON-RPC call 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:25:07.527 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:25:07.528 [ 0]:0x2 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c92b56c1bb54ab884d2bfe725c8779f 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c92b56c1bb54ab884d2bfe725c8779f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:07.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:07.528 13:09:19 
nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=103938 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 103938 /var/tmp/host.sock 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 103938 ']' 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:25:07.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.528 13:09:19 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:25:07.786 [2024-07-15 13:09:20.055263] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:25:07.786 [2024-07-15 13:09:20.055398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103938 ] 00:25:07.786 [2024-07-15 13:09:20.195063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.043 [2024-07-15 13:09:20.281104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.976 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.976 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:25:08.976 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:09.234 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:09.492 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 95b7b6a9-5890-40a9-b675-f34d73ba1f4d 00:25:09.492 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:25:09.492 13:09:21 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 95B7B6A9589040A9B675F34D73BA1F4D -i 00:25:09.749 13:09:22 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 796f59d9-012b-4357-882a-85760255632d 00:25:09.749 13:09:22 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:25:09.749 13:09:22 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 796F59D9012B4357882A85760255632D -i 00:25:10.007 13:09:22 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:25:10.573 13:09:22 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:25:10.830 13:09:23 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:25:10.830 13:09:23 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:25:11.394 nvme0n1 00:25:11.394 13:09:23 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:25:11.394 13:09:23 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:25:11.652 nvme1n2 00:25:11.652 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:25:11.652 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:25:11.652 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:25:11.652 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:25:11.652 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:25:12.219 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:25:12.219 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:25:12.219 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:25:12.219 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:25:12.476 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 95b7b6a9-5890-40a9-b675-f34d73ba1f4d == \9\5\b\7\b\6\a\9\-\5\8\9\0\-\4\0\a\9\-\b\6\7\5\-\f\3\4\d\7\3\b\a\1\f\4\d ]] 00:25:12.476 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:25:12.476 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:25:12.476 13:09:24 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
796f59d9-012b-4357-882a-85760255632d == \7\9\6\f\5\9\d\9\-\0\1\2\b\-\4\3\5\7\-\8\8\2\a\-\8\5\7\6\0\2\5\5\6\3\2\d ]] 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 103938 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 103938 ']' 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 103938 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103938 00:25:13.042 killing process with pid 103938 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103938' 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 103938 00:25:13.042 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 103938 00:25:13.300 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.562 rmmod nvme_tcp 00:25:13.562 rmmod nvme_fabrics 00:25:13.562 rmmod nvme_keyring 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@493 -- # '[' -n 103567 ']' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@494 -- # killprocess 103567 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 103567 ']' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 103567 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103567 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.562 killing process with pid 103567 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103567' 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 103567 00:25:13.562 13:09:25 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 103567 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:25:13.834 ************************************ 00:25:13.834 END TEST nvmf_ns_masking 00:25:13.834 ************************************ 00:25:13.834 00:25:13.834 real 0m19.709s 00:25:13.834 user 0m28.223s 00:25:13.834 sys 0m5.889s 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@46 -- # [[ 1 -eq 1 ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@47 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:13.834 ************************************ 00:25:13.834 START TEST nvmf_interrupt 00:25:13.834 ************************************ 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp 00:25:13.834 * Looking for test storage... 
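The nvmf_ns_masking run that ends above exercises SPDK's per-host namespace visibility: a namespace added with --no-auto-visible stays hidden from every host until nvmf_ns_add_host grants a host NQN access, and nvmf_ns_remove_host hides it again; on the initiator side a masked namespace drops out of nvme list-ns and its NGUID reads back as all zeros, which is exactly what the ns_is_visible checks grep for. A condensed sketch of that flow (not the test script itself), assuming a target already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and bdev Malloc1 as in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Namespace 1 is invisible to all hosts until explicitly exposed per host NQN.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # Initiator view: connect as host1 and check which namespaces it can see.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0                                # masked nsids are absent
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # all zeros => masked
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

In the trace, calling nvmf_ns_remove_host on namespace 2, which was added without --no-auto-visible, is wrapped in NOT and is expected to fail; it does, with the Code=-32602 Invalid parameters JSON-RPC error shown above.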
00:25:13.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.834 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@12 -- # nvmftestinit 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
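With NET_TYPE=virt the run never touches a physical NIC: nvmf_veth_init, traced in the lines that follow, builds the whole fabric from veth pairs, a bridge and a network namespace, so the target address 10.0.0.2 lives inside nvmf_tgt_ns_spdk while the initiator talks from 10.0.0.1 on the host side. A condensed sketch of that topology (leaving out the second target interface at 10.0.0.3, which is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2        # host side -> target namespace, as verified below

The "Cannot find device" and "Cannot open network namespace" messages below are expected: remove_spdk_ns and the link teardown run first to clear any leftovers from a previous test before the namespace and veth pairs are recreated.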
00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@436 -- # nvmf_veth_init 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:25:14.093 Cannot find device "nvmf_tgt_br" 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@159 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:25:14.093 Cannot find device "nvmf_tgt_br2" 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@160 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@161 -- # ip link set 
nvmf_init_br down 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:25:14.093 Cannot find device "nvmf_tgt_br" 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:25:14.093 Cannot find device "nvmf_tgt_br2" 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:14.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:14.093 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:14.093 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:14.094 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:14.094 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:25:14.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:25:14.353 00:25:14.353 --- 10.0.0.2 ping statistics --- 00:25:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.353 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:25:14.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:14.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:25:14.353 00:25:14.353 --- 10.0.0.3 ping statistics --- 00:25:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.353 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:14.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:25:14.353 00:25:14.353 --- 10.0.0.1 ping statistics --- 00:25:14.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.353 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@437 -- # return 0 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@13 -- # nvmfappstart -m 0x3 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@485 -- # nvmfpid=104293 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@486 -- # waitforlisten 104293 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@829 -- # '[' -z 104293 ']' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.353 13:09:26 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.353 [2024-07-15 13:09:26.808886] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:14.353 [2024-07-15 13:09:26.810119] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:25:14.353 [2024-07-15 13:09:26.810823] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.612 [2024-07-15 13:09:26.956331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:14.612 [2024-07-15 13:09:27.047372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.612 [2024-07-15 13:09:27.047462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.612 [2024-07-15 13:09:27.047496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.612 [2024-07-15 13:09:27.047512] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.612 [2024-07-15 13:09:27.047524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.612 [2024-07-15 13:09:27.047674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.612 [2024-07-15 13:09:27.047710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.871 [2024-07-15 13:09:27.108151] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:14.871 [2024-07-15 13:09:27.108309] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:14.871 [2024-07-15 13:09:27.108526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@862 -- # return 0 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@14 -- # setup_bdev_aio 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@75 -- # uname -s 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:25:14.871 5000+0 records in 00:25:14.871 5000+0 records out 00:25:14.871 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0245418 s, 417 MB/s 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 
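The part specific to this test is that nvmfappstart launches the target above with --interrupt-mode on core mask 0x3, inside the target namespace, and the thread.c notices confirm that the app thread and both nvmf poll-group threads end up in interrupt mode. Roughly, the bring-up that the rpc_cmd calls below perform looks like this (paths and names as in the trace; a sketch under those assumptions, not the test script itself):

  # Target side: interrupt-mode nvmf_tgt on two cores, backed by an AIO file bdev.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  # (the test waits for /var/tmp/spdk.sock before issuing any RPCs)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000
  $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the subsystem is exposed, the reactor_is_idle checks that begin below sample per-core CPU usage with top to confirm that, with no I/O outstanding, the interrupt-mode reactors sit idle instead of busy-polling.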
00:25:14.871 AIO0 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@16 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.871 [2024-07-15 13:09:27.244642] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.871 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:14.872 [2024-07-15 13:09:27.272899] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@21 -- # for i in {0..1} 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@22 -- # reactor_is_idle 104293 0 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 104293 0 idle 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( 
j != 0 )) 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:14.872 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104293 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.26 reactor_0' 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104293 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.26 reactor_0 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@21 -- # for i in {0..1} 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@22 -- # reactor_is_idle 104293 1 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 104293 1 idle 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:15.130 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104307 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.00 reactor_1' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104307 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.00 reactor_1 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:25:15.389 13:09:27 
nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@25 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@31 -- # perf_pid=104362 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@33 -- # for i in {0..1} 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@34 -- # reactor_is_busy 104293 0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 104293 0 busy 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104293 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.27 reactor_0' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104293 root 20 0 64.2g 44288 31872 S 0.0 0.4 0:00.27 reactor_0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 0 -lt 70 ]] 00:25:15.389 13:09:27 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@29 -- # sleep 1 00:25:16.323 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- 
# (( j-- )) 00:25:16.323 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:16.323 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:16.323 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104293 root 20 0 64.2g 44800 32000 R 73.3 0.4 0:01.01 reactor_0' 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104293 root 20 0 64.2g 44800 32000 R 73.3 0.4 0:01.01 reactor_0 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=73.3 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=73 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 73 -lt 70 ]] 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@33 -- # for i in {0..1} 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@34 -- # reactor_is_busy 104293 1 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 104293 1 busy 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:16.581 13:09:28 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104307 root 20 0 64.2g 44800 32000 R 66.7 0.4 0:00.87 reactor_1' 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104307 root 20 0 64.2g 44800 32000 R 66.7 0.4 0:00.87 reactor_1 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=66.7 00:25:16.840 13:09:29 
nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=66 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 66 -lt 70 ]] 00:25:16.840 13:09:29 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@29 -- # sleep 1 00:25:17.772 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j-- )) 00:25:17.772 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:17.772 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:17.772 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104307 root 20 0 64.2g 44800 32000 R 60.0 0.4 0:01.69 reactor_1' 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104307 root 20 0 64.2g 44800 32000 R 60.0 0.4 0:01.69 reactor_1 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=60.0 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=60 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 60 -lt 70 ]] 00:25:18.030 13:09:30 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@29 -- # sleep 1 00:25:18.967 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j-- )) 00:25:18.967 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:18.967 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:18.967 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:25:19.225 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104307 root 20 0 64.2g 44800 32000 R 73.3 0.4 0:02.49 reactor_1' 00:25:19.225 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104307 root 20 0 64.2g 44800 32000 R 73.3 0.4 0:02.49 reactor_1 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=73.3 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=73 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 73 -lt 70 ]] 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 
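The reactor_is_busy / reactor_is_idle checks traced above all reduce to sampling the per-thread %CPU of the reactor with top and comparing it against fixed thresholds: busy means at least 70%, idle means at most 30%, with up to ten 1-second retries. A condensed sketch of that logic, using illustrative names rather than the harness's exact code:

# Usage: check_reactor_state <target pid> <reactor index> busy|idle
check_reactor_state() {
    local pid=$1 idx=$2 want=$3 j cpu
    for ((j = 10; j != 0; j--)); do
        # -bH: batch mode with threads, -n 1: single sample; column 9 is %CPU
        cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
        cpu=${cpu%.*}                                   # drop the fractional part
        if [[ $want == busy && ${cpu:-0} -ge 70 ]]; then
            return 0
        elif [[ $want == idle && ${cpu:-0} -le 30 ]]; then
            return 0
        fi
        sleep 1
    done
    return 1
}

In interrupt mode the reactors sit at 0.0% while no I/O is flowing, and settle in the 60-75% range here once the perf workload starts, which is why the busy checks above need a retry or two before they pass.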
00:25:19.226 13:09:31 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@37 -- # wait 104362 00:25:25.889 Initializing NVMe Controllers 00:25:25.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:25:25.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:25:25.889 Initialization complete. Launching workers. 00:25:25.889 ======================================================== 00:25:25.889 Latency(us) 00:25:25.889 Device Information : IOPS MiB/s Average min max 00:25:25.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 5787.30 22.61 11063.91 1314.42 91994.51 00:25:25.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 5798.60 22.65 11039.37 1431.13 98090.23 00:25:25.889 ======================================================== 00:25:25.889 Total : 11585.90 45.26 11051.63 1314.42 98090.23 00:25:25.889 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@39 -- # for i in {0..1} 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@40 -- # reactor_is_idle 104293 0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 104293 0 idle 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104293 root 20 0 64.2g 44800 32000 S 0.0 0.4 0:07.14 reactor_0' 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104293 root 20 0 64.2g 44800 32000 S 0.0 0.4 0:07.14 reactor_0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@39 -- # for i in {0..1} 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@40 -- # reactor_is_idle 104293 1 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 104293 1 idle 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=104293 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 104293 -w 256 00:25:25.889 13:09:37 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 104307 root 20 0 64.2g 44800 32000 S 0.0 0.4 0:06.97 reactor_1' 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 104307 root 20 0 64.2g 44800 32000 S 0.0 0.4 0:06.97 reactor_1 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@43 -- # cleanup 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@44 -- # nvmftestfini 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:25:25.889 
13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.889 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.889 rmmod nvme_tcp 00:25:25.889 rmmod nvme_fabrics 00:25:25.889 rmmod nvme_keyring 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@493 -- # '[' -n 104293 ']' 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@494 -- # killprocess 104293 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@948 -- # '[' -z 104293 ']' 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@952 -- # kill -0 104293 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@953 -- # uname 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104293 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:26.148 killing process with pid 104293 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104293' 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@967 -- # kill 104293 00:25:26.148 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@972 -- # wait 104293 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@1 -- # process_shm --id 0 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@806 -- # type=--id 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@807 -- # id=0 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 
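The nvmftestfini / nvmfcleanup sequence above is the host-side teardown: flush outstanding writes, unload the NVMe/TCP initiator modules while tolerating a few transient failures, and only then kill the target process. A rough sketch of the module-unload part (the retry framing and the sleep are approximations read off the trace; the exact loop body lives in nvmf/common.sh):

sync
set +e
for _ in {1..20}; do
    modprobe -v -r nvme-tcp && break      # retried because the module can still be busy right after a disconnect
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e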
00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:26.406 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:26.406 nvmf_trace.0 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@821 -- # return 0 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- target/interrupt.sh@1 -- # nvmftestfini 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:25:26.665 13:09:38 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@493 -- # '[' -n 104293 ']' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@494 -- # killprocess 104293 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@948 -- # '[' -z 104293 ']' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@952 -- # kill -0 104293 00:25:26.665 Process with pid 104293 is not found 00:25:26.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (104293) - No such process 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@975 -- # echo 'Process with pid 104293 is not found' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
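The process_shm step above is what preserves the interrupt-mode run for offline debugging: because the target was started with -e 0xFFFF, it left a trace buffer named nvmf_trace.0 in /dev/shm, and the helper simply archives it into the job's output directory. A standalone sketch of the same capture (the output path mirrors the spdk/../output location seen in the trace):

shm_id=0
out_dir=/home/vagrant/spdk_repo/output
for f in $(find /dev/shm -name "*.${shm_id}" -printf '%f\n'); do
    tar -C /dev/shm/ -cvzf "${out_dir}/${f}_shm.tar.gz" "$f"
done

As the target's own startup notice suggested, the archived nvmf_trace.0 can then be inspected offline, for example with the spdk_trace tool.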
00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:25:26.665 00:25:26.665 real 0m12.772s 00:25:26.665 user 0m27.390s 00:25:26.665 sys 0m7.862s 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:26.665 ************************************ 00:25:26.665 END TEST nvmf_interrupt 00:25:26.665 ************************************ 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@51 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:26.665 ************************************ 00:25:26.665 START TEST nvmf_host_management 00:25:26.665 ************************************ 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:25:26.665 * Looking for test storage... 00:25:26.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.665 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:25:26.924 13:09:39 
nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:25:26.924 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # nvmf_veth_init 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:26.925 13:09:39 
nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:25:26.925 Cannot find device "nvmf_tgt_br" 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:25:26.925 Cannot find device "nvmf_tgt_br2" 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:25:26.925 Cannot find device "nvmf_tgt_br" 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:25:26.925 Cannot find device "nvmf_tgt_br2" 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:26.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:26.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:25:26.925 
13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:25:26.925 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:25:27.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:27.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:25:27.184 00:25:27.184 --- 10.0.0.2 ping statistics --- 00:25:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.184 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:25:27.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:27.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:25:27.184 00:25:27.184 --- 10.0.0.3 ping statistics --- 00:25:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.184 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:27.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:27.184 00:25:27.184 --- 10.0.0.1 ping statistics --- 00:25:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.184 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@437 -- # return 0 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@485 -- # nvmfpid=104727 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@486 -- # waitforlisten 104727 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 104727 ']' 00:25:27.184 13:09:39 
nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.184 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.184 [2024-07-15 13:09:39.559347] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:27.184 [2024-07-15 13:09:39.561029] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:25:27.184 [2024-07-15 13:09:39.561107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.443 [2024-07-15 13:09:39.710136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.443 [2024-07-15 13:09:39.770394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.443 [2024-07-15 13:09:39.770456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.443 [2024-07-15 13:09:39.770468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.443 [2024-07-15 13:09:39.770477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.443 [2024-07-15 13:09:39.770484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.443 [2024-07-15 13:09:39.770550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.443 [2024-07-15 13:09:39.771602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.443 [2024-07-15 13:09:39.771720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:27.443 [2024-07-15 13:09:39.771739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.443 [2024-07-15 13:09:39.839084] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:27.443 [2024-07-15 13:09:39.839299] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:27.443 [2024-07-15 13:09:39.839465] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:27.443 [2024-07-15 13:09:39.839847] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:27.443 [2024-07-15 13:09:39.840180] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
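Stepping back to the nvmf_veth_init sequence traced just before this target start: the test network is two veth pairs joined by a bridge, with the initiator side left in the default namespace at 10.0.0.1 and the target side moved into nvmf_tgt_ns_spdk at 10.0.0.2, plus an iptables rule admitting the NVMe/TCP port. A trimmed sketch of that topology (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

With that in place, nvmf_tgt is launched via ip netns exec nvmf_tgt_ns_spdk with --interrupt-mode -m 0x1E, exactly as the nvmfappstart trace above shows.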
00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.443 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.702 [2024-07-15 13:09:39.912676] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.702 Malloc0 00:25:27.702 [2024-07-15 13:09:39.972892] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.702 13:09:39 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=104784 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 104784 /var/tmp/bdevperf.sock 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 104784 ']' 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.702 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.702 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:25:27.703 { 00:25:27.703 "params": { 00:25:27.703 "name": "Nvme$subsystem", 00:25:27.703 "trtype": "$TEST_TRANSPORT", 00:25:27.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.703 "adrfam": "ipv4", 00:25:27.703 "trsvcid": "$NVMF_PORT", 00:25:27.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.703 "hdgst": ${hdgst:-false}, 00:25:27.703 "ddgst": ${ddgst:-false} 00:25:27.703 }, 00:25:27.703 "method": "bdev_nvme_attach_controller" 00:25:27.703 } 00:25:27.703 EOF 00:25:27.703 )") 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:25:27.703 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:25:27.703 "params": { 00:25:27.703 "name": "Nvme0", 00:25:27.703 "trtype": "tcp", 00:25:27.703 "traddr": "10.0.0.2", 00:25:27.703 "adrfam": "ipv4", 00:25:27.703 "trsvcid": "4420", 00:25:27.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:27.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:27.703 "hdgst": false, 00:25:27.703 "ddgst": false 00:25:27.703 }, 00:25:27.703 "method": "bdev_nvme_attach_controller" 00:25:27.703 }' 00:25:27.703 [2024-07-15 13:09:40.072370] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:25:27.703 [2024-07-15 13:09:40.072453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104784 ] 00:25:27.961 [2024-07-15 13:09:40.213885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.961 [2024-07-15 13:09:40.299753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.220 Running I/O for 10 seconds... 
00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=20 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 20 -ge 100 ']' 00:25:28.220 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.481 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:28.481 [2024-07-15 13:09:40.881922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.481 [2024-07-15 13:09:40.881974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.481 [2024-07-15 13:09:40.881998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.481 [2024-07-15 13:09:40.882012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.481 [2024-07-15 13:09:40.882027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.481 [2024-07-15 13:09:40.882040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.481 [2024-07-15 13:09:40.882055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.481 [2024-07-15 13:09:40.882069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.481 [2024-07-15 13:09:40.882082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5baf0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891755] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891873] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891891] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891905] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891918] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 
[2024-07-15 13:09:40.891931] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891945] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891959] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891972] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891985] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.891998] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892011] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892025] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892038] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892051] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892065] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892079] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892093] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892106] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892120] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892133] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892146] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892159] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892172] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892185] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892199] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892212] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892227] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892240] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892253] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892266] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892280] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892294] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892307] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892320] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892334] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892347] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.481 [2024-07-15 13:09:40.892360] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892374] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892387] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892401] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892415] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892429] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892442] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892456] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892469] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892482] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892495] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892510] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892523] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892536] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892550] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892563] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892576] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892589] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892603] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892617] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892630] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892643] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892658] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892672] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 [2024-07-15 13:09:40.892685] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed8a0 is same with the state(5) to be set 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:28.482 [2024-07-15 13:09:40.898568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5baf0 (9): Bad file descriptor 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.482 13:09:40 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:25:28.482 [2024-07-15 13:09:40.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 
[2024-07-15 13:09:40.917189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 
13:09:40.917398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 
13:09:40.917620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.482 [2024-07-15 13:09:40.917752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.482 [2024-07-15 13:09:40.917781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.917816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.917851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 
13:09:40.917886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.917921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.917955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.917971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.917990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 
13:09:40.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 
13:09:40.918589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 
13:09:40.918953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.918969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.918986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.919001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.919019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.483 [2024-07-15 13:09:40.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.483 [2024-07-15 13:09:40.919133] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a5b820 was disconnected and freed. reset controller. 00:25:28.483 [2024-07-15 13:09:40.919153] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:28.483 [2024-07-15 13:09:40.920642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:28.483 task offset: 57216 on job bdev=Nvme0n1 fails 00:25:28.483 00:25:28.483 Latency(us) 00:25:28.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.483 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:28.483 Job: Nvme0n1 ended in about 0.47 seconds with error 00:25:28.483 Verification LBA range: start 0x0 length 0x400 00:25:28.483 Nvme0n1 : 0.47 942.22 58.89 134.90 0.00 57626.33 2591.65 56718.43 00:25:28.483 =================================================================================================================== 00:25:28.483 Total : 942.22 58.89 134.90 0.00 57626.33 2591.65 56718.43 00:25:28.483 [2024-07-15 13:09:40.923447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:28.483 [2024-07-15 13:09:40.927276] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 104784 00:25:29.858 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (104784) - No such process 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:25:29.858 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:25:29.858 { 00:25:29.858 "params": { 00:25:29.858 "name": "Nvme$subsystem", 00:25:29.858 "trtype": "$TEST_TRANSPORT", 00:25:29.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.858 "adrfam": "ipv4", 00:25:29.858 "trsvcid": "$NVMF_PORT", 00:25:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.859 "hdgst": ${hdgst:-false}, 00:25:29.859 "ddgst": ${ddgst:-false} 00:25:29.859 }, 00:25:29.859 "method": "bdev_nvme_attach_controller" 00:25:29.859 } 00:25:29.859 EOF 00:25:29.859 )") 00:25:29.859 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:25:29.859 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 00:25:29.859 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:25:29.859 13:09:41 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:25:29.859 "params": { 00:25:29.859 "name": "Nvme0", 00:25:29.859 "trtype": "tcp", 00:25:29.859 "traddr": "10.0.0.2", 00:25:29.859 "adrfam": "ipv4", 00:25:29.859 "trsvcid": "4420", 00:25:29.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:29.859 "hdgst": false, 00:25:29.859 "ddgst": false 00:25:29.859 }, 00:25:29.859 "method": "bdev_nvme_attach_controller" 00:25:29.859 }' 00:25:29.859 [2024-07-15 13:09:41.968415] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:25:29.859 [2024-07-15 13:09:41.968547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104826 ] 00:25:29.859 [2024-07-15 13:09:42.109329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.859 [2024-07-15 13:09:42.196673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.117 Running I/O for 1 seconds... 
00:25:31.050 00:25:31.050 Latency(us) 00:25:31.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.050 Verification LBA range: start 0x0 length 0x400 00:25:31.050 Nvme0n1 : 1.04 1354.83 84.68 0.00 0.00 46291.02 9889.98 41704.73 00:25:31.050 =================================================================================================================== 00:25:31.050 Total : 1354.83 84.68 0.00 0.00 46291.02 9889.98 41704.73 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.307 rmmod nvme_tcp 00:25:31.307 rmmod nvme_fabrics 00:25:31.307 rmmod nvme_keyring 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # '[' -n 104727 ']' 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # killprocess 104727 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 104727 ']' 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 104727 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104727 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:31.307 killing process with pid 104727 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 104727' 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 104727 00:25:31.307 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 104727 00:25:31.566 [2024-07-15 13:09:43.856128] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:25:31.566 00:25:31.566 real 0m4.870s 00:25:31.566 user 0m16.343s 00:25:31.566 sys 0m2.661s 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:25:31.566 ************************************ 00:25:31.566 END TEST nvmf_host_management 00:25:31.566 ************************************ 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@52 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:31.566 ************************************ 00:25:31.566 START TEST nvmf_lvol 00:25:31.566 ************************************ 00:25:31.566 13:09:43 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:25:31.566 * Looking for test storage... 
00:25:31.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.824 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # nvmf_veth_init 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:25:31.825 Cannot find device "nvmf_tgt_br" 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:25:31.825 Cannot 
find device "nvmf_tgt_br2" 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:25:31.825 Cannot find device "nvmf_tgt_br" 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:25:31.825 Cannot find device "nvmf_tgt_br2" 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:31.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:31.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:31.825 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:32.084 
13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:25:32.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:25:32.084 00:25:32.084 --- 10.0.0.2 ping statistics --- 00:25:32.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.084 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:25:32.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:32.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:32.084 00:25:32.084 --- 10.0.0.3 ping statistics --- 00:25:32.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.084 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:32.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:25:32.084 00:25:32.084 --- 10.0.0.1 ping statistics --- 00:25:32.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.084 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@437 -- # return 0 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@485 -- # nvmfpid=105036 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@486 -- # waitforlisten 105036 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 105036 ']' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.084 13:09:44 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:32.084 [2024-07-15 13:09:44.503361] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:32.084 [2024-07-15 13:09:44.504487] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:25:32.084 [2024-07-15 13:09:44.504551] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.342 [2024-07-15 13:09:44.637613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:32.342 [2024-07-15 13:09:44.707130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.342 [2024-07-15 13:09:44.707190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.342 [2024-07-15 13:09:44.707202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.342 [2024-07-15 13:09:44.707210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.342 [2024-07-15 13:09:44.707217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.342 [2024-07-15 13:09:44.707338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.342 [2024-07-15 13:09:44.707815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.342 [2024-07-15 13:09:44.707827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.342 [2024-07-15 13:09:44.754598] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:32.342 [2024-07-15 13:09:44.754670] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:32.342 [2024-07-15 13:09:44.755056] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:32.342 [2024-07-15 13:09:44.755294] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
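For reference, the nvmfappstart step logged above amounts to launching nvmf_tgt inside the test namespace in interrupt mode and waiting for its RPC socket. A minimal sketch with the flags and core mask copied from the log; the readiness loop is a simplified stand-in for waitforlisten:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!

    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
    until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done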
00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.278 [2024-07-15 13:09:45.680935] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.278 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.535 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:25:33.535 13:09:45 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.794 13:09:46 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:25:33.794 13:09:46 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:25:34.052 13:09:46 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:25:34.311 13:09:46 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5d3d1919-3138-43ba-8764-48da06008238 00:25:34.311 13:09:46 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d3d1919-3138-43ba-8764-48da06008238 lvol 20 00:25:34.569 13:09:47 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=21abe8bc-5ada-44b0-b787-c6d7315163be 00:25:34.569 13:09:47 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:34.827 13:09:47 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21abe8bc-5ada-44b0-b787-c6d7315163be 00:25:35.393 13:09:47 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.393 [2024-07-15 13:09:47.844719] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.651 13:09:47 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:35.651 13:09:48 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:25:35.651 13:09:48 
nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=105180 00:25:35.651 13:09:48 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:25:37.026 13:09:49 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 21abe8bc-5ada-44b0-b787-c6d7315163be MY_SNAPSHOT 00:25:37.284 13:09:49 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=392313c5-f9e6-4aaa-84f6-9aabc9cb6260 00:25:37.284 13:09:49 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 21abe8bc-5ada-44b0-b787-c6d7315163be 30 00:25:37.541 13:09:49 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 392313c5-f9e6-4aaa-84f6-9aabc9cb6260 MY_CLONE 00:25:37.866 13:09:50 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ee4a4c70-50a2-4d7b-9a06-d565b0a9449e 00:25:37.866 13:09:50 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ee4a4c70-50a2-4d7b-9a06-d565b0a9449e 00:25:38.813 13:09:50 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 105180 00:25:46.921 Initializing NVMe Controllers 00:25:46.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:25:46.921 Controller IO queue size 128, less than required. 00:25:46.921 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:25:46.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:25:46.921 Initialization complete. Launching workers. 
00:25:46.921 ======================================================== 00:25:46.921 Latency(us) 00:25:46.921 Device Information : IOPS MiB/s Average min max 00:25:46.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9180.08 35.86 13956.08 1808.18 67068.90 00:25:46.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9188.08 35.89 13932.82 5115.02 64746.76 00:25:46.921 ======================================================== 00:25:46.921 Total : 18368.16 71.75 13944.44 1808.18 67068.90 00:25:46.921 00:25:46.921 13:09:58 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.921 13:09:58 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 21abe8bc-5ada-44b0-b787-c6d7315163be 00:25:46.921 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d3d1919-3138-43ba-8764-48da06008238 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.190 rmmod nvme_tcp 00:25:47.190 rmmod nvme_fabrics 00:25:47.190 rmmod nvme_keyring 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # '[' -n 105036 ']' 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # killprocess 105036 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 105036 ']' 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 105036 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.190 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105036 00:25:47.453 killing process with pid 105036 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 105036' 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 105036 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 105036 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.453 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:25:47.711 00:25:47.711 real 0m15.955s 00:25:47.711 user 0m55.053s 00:25:47.711 sys 0m6.876s 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:25:47.711 ************************************ 00:25:47.711 END TEST nvmf_lvol 00:25:47.711 ************************************ 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@53 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:47.711 ************************************ 00:25:47.711 START TEST nvmf_lvs_grow 00:25:47.711 ************************************ 00:25:47.711 13:09:59 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:25:47.711 * Looking for test storage... 
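Condensed, the nvmf_lvol run that just finished is the RPC sequence below. This is a sketch rather than the script itself: UUIDs are captured from the create calls instead of being hard-coded, and sizes are in MiB as in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                              # -> Malloc0
    $rpc bdev_malloc_create 64 512                              # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)              # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)             # 20 MiB volume, returns its UUID

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                            # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                             # detach the clone from its snapshot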
00:25:47.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:47.712 
13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # nvmf_veth_init 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:25:47.712 Cannot find device "nvmf_tgt_br" 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.712 Cannot find device "nvmf_tgt_br2" 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # true 00:25:47.712 13:10:00 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:25:47.712 Cannot find device "nvmf_tgt_br" 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:25:47.712 Cannot find device "nvmf_tgt_br2" 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:25:47.712 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:47.970 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:47.971 13:10:00 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:25:47.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:25:47.971 00:25:47.971 --- 10.0.0.2 ping statistics --- 00:25:47.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.971 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:25:47.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:47.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:47.971 00:25:47.971 --- 10.0.0.3 ping statistics --- 00:25:47.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.971 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:47.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:25:47.971 00:25:47.971 --- 10.0.0.1 ping statistics --- 00:25:47.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.971 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@437 -- # return 0 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.971 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:48.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@485 -- # nvmfpid=105533 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@486 -- # waitforlisten 105533 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 105533 ']' 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.230 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:48.230 [2024-07-15 13:10:00.517686] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:48.230 [2024-07-15 13:10:00.519324] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:25:48.231 [2024-07-15 13:10:00.519419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.231 [2024-07-15 13:10:00.657270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.488 [2024-07-15 13:10:00.716580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.488 [2024-07-15 13:10:00.716638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.488 [2024-07-15 13:10:00.716649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.488 [2024-07-15 13:10:00.716658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.488 [2024-07-15 13:10:00.716665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.488 [2024-07-15 13:10:00.716693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.488 [2024-07-15 13:10:00.763542] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:48.488 [2024-07-15 13:10:00.763868] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.488 13:10:00 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:48.745 [2024-07-15 13:10:01.081414] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:48.745 ************************************ 00:25:48.745 START TEST lvs_grow_clean 00:25:48.745 ************************************ 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:25:48.745 13:10:01 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:48.745 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:49.002 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:25:49.003 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:25:49.260 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=488b755a-b508-4e78-82d8-3a52159f9879 00:25:49.260 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:25:49.260 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:25:49.518 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:25:49.519 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:25:49.519 13:10:01 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 488b755a-b508-4e78-82d8-3a52159f9879 lvol 150 00:25:50.086 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d3beb92-d53b-4421-8a99-ec10d210ac34 00:25:50.086 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:50.086 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:25:50.344 [2024-07-15 13:10:02.625284] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:25:50.344 [2024-07-15 13:10:02.626010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:25:50.344 true 00:25:50.344 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
488b755a-b508-4e78-82d8-3a52159f9879 00:25:50.344 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:25:50.602 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:25:50.602 13:10:02 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:50.860 13:10:03 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d3beb92-d53b-4421-8a99-ec10d210ac34 00:25:51.425 13:10:03 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:51.425 [2024-07-15 13:10:03.833630] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.425 13:10:03 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=105683 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 105683 /var/tmp/bdevperf.sock 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 105683 ']' 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.683 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.940 [2024-07-15 13:10:04.200738] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:25:51.940 [2024-07-15 13:10:04.200886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105683 ] 00:25:51.940 [2024-07-15 13:10:04.345981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.198 [2024-07-15 13:10:04.427024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.198 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.198 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:25:52.198 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:25:52.456 Nvme0n1 00:25:52.456 13:10:04 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:25:53.021 [ 00:25:53.021 { 00:25:53.021 "aliases": [ 00:25:53.021 "1d3beb92-d53b-4421-8a99-ec10d210ac34" 00:25:53.021 ], 00:25:53.021 "assigned_rate_limits": { 00:25:53.021 "r_mbytes_per_sec": 0, 00:25:53.021 "rw_ios_per_sec": 0, 00:25:53.021 "rw_mbytes_per_sec": 0, 00:25:53.021 "w_mbytes_per_sec": 0 00:25:53.021 }, 00:25:53.021 "block_size": 4096, 00:25:53.021 "claimed": false, 00:25:53.021 "driver_specific": { 00:25:53.021 "mp_policy": "active_passive", 00:25:53.021 "nvme": [ 00:25:53.021 { 00:25:53.021 "ctrlr_data": { 00:25:53.021 "ana_reporting": false, 00:25:53.021 "cntlid": 1, 00:25:53.022 "firmware_revision": "24.09", 00:25:53.022 "model_number": "SPDK bdev Controller", 00:25:53.022 "multi_ctrlr": true, 00:25:53.022 "oacs": { 00:25:53.022 "firmware": 0, 00:25:53.022 "format": 0, 00:25:53.022 "ns_manage": 0, 00:25:53.022 "security": 0 00:25:53.022 }, 00:25:53.022 "serial_number": "SPDK0", 00:25:53.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.022 "vendor_id": "0x8086" 00:25:53.022 }, 00:25:53.022 "ns_data": { 00:25:53.022 "can_share": true, 00:25:53.022 "id": 1 00:25:53.022 }, 00:25:53.022 "trid": { 00:25:53.022 "adrfam": "IPv4", 00:25:53.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.022 "traddr": "10.0.0.2", 00:25:53.022 "trsvcid": "4420", 00:25:53.022 "trtype": "TCP" 00:25:53.022 }, 00:25:53.022 "vs": { 00:25:53.022 "nvme_version": "1.3" 00:25:53.022 } 00:25:53.022 } 00:25:53.022 ] 00:25:53.022 }, 00:25:53.022 "memory_domains": [ 00:25:53.022 { 00:25:53.022 "dma_device_id": "system", 00:25:53.022 "dma_device_type": 1 00:25:53.022 } 00:25:53.022 ], 00:25:53.022 "name": "Nvme0n1", 00:25:53.022 "num_blocks": 38912, 00:25:53.022 "product_name": "NVMe disk", 00:25:53.022 "supported_io_types": { 00:25:53.022 "abort": true, 00:25:53.022 "compare": true, 00:25:53.022 "compare_and_write": true, 00:25:53.022 "copy": true, 00:25:53.022 "flush": true, 00:25:53.022 "get_zone_info": false, 00:25:53.022 "nvme_admin": true, 00:25:53.022 "nvme_io": true, 00:25:53.022 "nvme_io_md": false, 00:25:53.022 "nvme_iov_md": false, 00:25:53.022 "read": true, 00:25:53.022 "reset": true, 00:25:53.022 "seek_data": false, 00:25:53.022 "seek_hole": false, 00:25:53.022 "unmap": true, 00:25:53.022 "write": true, 00:25:53.022 
"write_zeroes": true, 00:25:53.022 "zcopy": false, 00:25:53.022 "zone_append": false, 00:25:53.022 "zone_management": false 00:25:53.022 }, 00:25:53.022 "uuid": "1d3beb92-d53b-4421-8a99-ec10d210ac34", 00:25:53.022 "zoned": false 00:25:53.022 } 00:25:53.022 ] 00:25:53.022 13:10:05 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=105717 00:25:53.022 13:10:05 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:25:53.022 13:10:05 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:53.022 Running I/O for 10 seconds... 00:25:54.395 Latency(us) 00:25:54.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:54.395 Nvme0n1 : 1.00 6937.00 27.10 0.00 0.00 0.00 0.00 0.00 00:25:54.395 =================================================================================================================== 00:25:54.395 Total : 6937.00 27.10 0.00 0.00 0.00 0.00 0.00 00:25:54.395 00:25:54.972 13:10:07 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 488b755a-b508-4e78-82d8-3a52159f9879 00:25:55.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:55.241 Nvme0n1 : 2.00 6705.00 26.19 0.00 0.00 0.00 0.00 0.00 00:25:55.241 =================================================================================================================== 00:25:55.241 Total : 6705.00 26.19 0.00 0.00 0.00 0.00 0.00 00:25:55.241 00:25:55.498 true 00:25:55.498 13:10:07 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:25:55.498 13:10:07 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:25:55.754 13:10:08 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:25:55.754 13:10:08 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:25:55.754 13:10:08 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 105717 00:25:56.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:56.012 Nvme0n1 : 3.00 6859.33 26.79 0.00 0.00 0.00 0.00 0.00 00:25:56.012 =================================================================================================================== 00:25:56.012 Total : 6859.33 26.79 0.00 0.00 0.00 0.00 0.00 00:25:56.012 00:25:57.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:57.405 Nvme0n1 : 4.00 6612.50 25.83 0.00 0.00 0.00 0.00 0.00 00:25:57.405 =================================================================================================================== 00:25:57.405 Total : 6612.50 25.83 0.00 0.00 0.00 0.00 0.00 00:25:57.405 00:25:57.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:57.971 Nvme0n1 : 5.00 6491.60 25.36 0.00 0.00 0.00 0.00 0.00 00:25:57.971 
=================================================================================================================== 00:25:57.971 Total : 6491.60 25.36 0.00 0.00 0.00 0.00 0.00 00:25:57.971 00:25:59.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:59.344 Nvme0n1 : 6.00 6388.83 24.96 0.00 0.00 0.00 0.00 0.00 00:25:59.344 =================================================================================================================== 00:25:59.344 Total : 6388.83 24.96 0.00 0.00 0.00 0.00 0.00 00:25:59.344 00:26:00.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:00.279 Nvme0n1 : 7.00 6271.29 24.50 0.00 0.00 0.00 0.00 0.00 00:26:00.279 =================================================================================================================== 00:26:00.279 Total : 6271.29 24.50 0.00 0.00 0.00 0.00 0.00 00:26:00.279 00:26:01.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:01.211 Nvme0n1 : 8.00 6215.50 24.28 0.00 0.00 0.00 0.00 0.00 00:26:01.211 =================================================================================================================== 00:26:01.211 Total : 6215.50 24.28 0.00 0.00 0.00 0.00 0.00 00:26:01.211 00:26:02.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:02.150 Nvme0n1 : 9.00 6199.78 24.22 0.00 0.00 0.00 0.00 0.00 00:26:02.150 =================================================================================================================== 00:26:02.150 Total : 6199.78 24.22 0.00 0.00 0.00 0.00 0.00 00:26:02.150 00:26:03.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:03.085 Nvme0n1 : 10.00 6157.60 24.05 0.00 0.00 0.00 0.00 0.00 00:26:03.085 =================================================================================================================== 00:26:03.085 Total : 6157.60 24.05 0.00 0.00 0.00 0.00 0.00 00:26:03.085 00:26:03.085 00:26:03.085 Latency(us) 00:26:03.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:03.085 Nvme0n1 : 10.00 6167.64 24.09 0.00 0.00 20747.23 8162.21 55765.18 00:26:03.085 =================================================================================================================== 00:26:03.085 Total : 6167.64 24.09 0.00 0.00 20747.23 8162.21 55765.18 00:26:03.085 0 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 105683 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 105683 ']' 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 105683 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105683 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:26:03.085 killing process with pid 105683 00:26:03.085 Received shutdown signal, test time was about 10.000000 seconds 00:26:03.085 00:26:03.085 Latency(us) 00:26:03.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.085 =================================================================================================================== 00:26:03.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105683' 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 105683 00:26:03.085 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 105683 00:26:03.347 13:10:15 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:03.958 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.217 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:04.217 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:26:04.791 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:26:04.791 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:26:04.791 13:10:16 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:05.050 [2024-07-15 13:10:17.361363] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:05.050 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:05.616 2024/07/15 13:10:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:488b755a-b508-4e78-82d8-3a52159f9879], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:26:05.616 request: 00:26:05.616 { 00:26:05.616 "method": "bdev_lvol_get_lvstores", 00:26:05.616 "params": { 00:26:05.616 "uuid": "488b755a-b508-4e78-82d8-3a52159f9879" 00:26:05.616 } 00:26:05.616 } 00:26:05.616 Got JSON-RPC error response 00:26:05.616 GoRPCClient: error on JSON-RPC call 00:26:05.616 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:26:05.616 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.616 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.616 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.616 13:10:17 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:05.874 aio_bdev 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d3beb92-d53b-4421-8a99-ec10d210ac34 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1d3beb92-d53b-4421-8a99-ec10d210ac34 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:05.874 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:06.441 13:10:18 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d3beb92-d53b-4421-8a99-ec10d210ac34 -t 2000 00:26:06.699 [ 00:26:06.699 { 00:26:06.699 "aliases": [ 00:26:06.699 "lvs/lvol" 00:26:06.699 ], 00:26:06.699 "assigned_rate_limits": { 00:26:06.699 "r_mbytes_per_sec": 0, 00:26:06.699 "rw_ios_per_sec": 0, 00:26:06.699 "rw_mbytes_per_sec": 0, 00:26:06.699 "w_mbytes_per_sec": 0 00:26:06.699 }, 00:26:06.699 "block_size": 4096, 00:26:06.699 
"claimed": false, 00:26:06.699 "driver_specific": { 00:26:06.699 "lvol": { 00:26:06.699 "base_bdev": "aio_bdev", 00:26:06.699 "clone": false, 00:26:06.699 "esnap_clone": false, 00:26:06.699 "lvol_store_uuid": "488b755a-b508-4e78-82d8-3a52159f9879", 00:26:06.699 "num_allocated_clusters": 38, 00:26:06.699 "snapshot": false, 00:26:06.699 "thin_provision": false 00:26:06.699 } 00:26:06.699 }, 00:26:06.699 "name": "1d3beb92-d53b-4421-8a99-ec10d210ac34", 00:26:06.699 "num_blocks": 38912, 00:26:06.699 "product_name": "Logical Volume", 00:26:06.699 "supported_io_types": { 00:26:06.699 "abort": false, 00:26:06.699 "compare": false, 00:26:06.699 "compare_and_write": false, 00:26:06.699 "copy": false, 00:26:06.699 "flush": false, 00:26:06.699 "get_zone_info": false, 00:26:06.699 "nvme_admin": false, 00:26:06.699 "nvme_io": false, 00:26:06.699 "nvme_io_md": false, 00:26:06.699 "nvme_iov_md": false, 00:26:06.699 "read": true, 00:26:06.699 "reset": true, 00:26:06.699 "seek_data": true, 00:26:06.699 "seek_hole": true, 00:26:06.699 "unmap": true, 00:26:06.699 "write": true, 00:26:06.699 "write_zeroes": true, 00:26:06.699 "zcopy": false, 00:26:06.699 "zone_append": false, 00:26:06.699 "zone_management": false 00:26:06.699 }, 00:26:06.699 "uuid": "1d3beb92-d53b-4421-8a99-ec10d210ac34", 00:26:06.699 "zoned": false 00:26:06.699 } 00:26:06.699 ] 00:26:06.699 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:26:06.699 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:06.699 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:26:07.265 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:26:07.265 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:07.265 13:10:19 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:26:07.830 13:10:20 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:26:07.830 13:10:20 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1d3beb92-d53b-4421-8a99-ec10d210ac34 00:26:08.088 13:10:20 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 488b755a-b508-4e78-82d8-3a52159f9879 00:26:08.655 13:10:20 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:09.221 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:09.478 00:26:09.478 real 0m20.834s 00:26:09.478 user 0m20.000s 00:26:09.478 sys 0m2.597s 00:26:09.478 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.478 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.478 ************************************ 00:26:09.478 END TEST lvs_grow_clean 00:26:09.478 ************************************ 00:26:09.736 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:26:09.736 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:26:09.736 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:09.736 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.736 13:10:21 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:09.736 ************************************ 00:26:09.736 START TEST lvs_grow_dirty 00:26:09.736 ************************************ 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:09.736 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:09.994 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:09.994 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:10.560 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:10.560 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:10.560 13:10:22 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:10.818 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:10.818 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:10.818 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a15fba0-d471-4529-b474-bf865cb6be7a lvol 150 00:26:11.397 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:11.397 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:11.397 13:10:23 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:26:11.656 [2024-07-15 13:10:24.025301] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:26:11.656 [2024-07-15 13:10:24.025439] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:26:11.656 true 00:26:11.656 13:10:24 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:11.656 13:10:24 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:26:12.319 13:10:24 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:26:12.319 13:10:24 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:12.582 13:10:24 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:13.147 13:10:25 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.405 [2024-07-15 13:10:25.816075] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.405 13:10:25 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=106132 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 106132 /var/tmp/bdevperf.sock 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 106132 ']' 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.973 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:13.973 [2024-07-15 13:10:26.365493] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:13.973 [2024-07-15 13:10:26.365619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106132 ] 00:26:14.231 [2024-07-15 13:10:26.505649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.231 [2024-07-15 13:10:26.594485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.490 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.490 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:26:14.490 13:10:26 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:26:14.754 Nvme0n1 00:26:14.754 13:10:27 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:26:15.322 [ 00:26:15.322 { 00:26:15.322 "aliases": [ 00:26:15.322 "8b28bf2a-e20f-4251-b659-cbe0ca748f09" 00:26:15.322 ], 00:26:15.322 "assigned_rate_limits": { 00:26:15.322 "r_mbytes_per_sec": 0, 00:26:15.322 "rw_ios_per_sec": 0, 00:26:15.322 "rw_mbytes_per_sec": 0, 00:26:15.322 "w_mbytes_per_sec": 0 00:26:15.322 }, 00:26:15.322 "block_size": 4096, 00:26:15.322 "claimed": false, 00:26:15.322 "driver_specific": { 00:26:15.322 "mp_policy": "active_passive", 00:26:15.322 "nvme": [ 00:26:15.322 { 00:26:15.322 "ctrlr_data": { 00:26:15.322 "ana_reporting": false, 00:26:15.322 "cntlid": 1, 00:26:15.322 "firmware_revision": "24.09", 00:26:15.322 "model_number": "SPDK bdev Controller", 00:26:15.322 "multi_ctrlr": true, 00:26:15.322 "oacs": { 00:26:15.322 "firmware": 0, 00:26:15.322 "format": 0, 00:26:15.322 "ns_manage": 0, 00:26:15.322 "security": 0 00:26:15.322 }, 00:26:15.322 "serial_number": "SPDK0", 00:26:15.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.322 "vendor_id": "0x8086" 00:26:15.322 }, 00:26:15.322 "ns_data": { 00:26:15.322 "can_share": true, 00:26:15.322 "id": 1 00:26:15.322 }, 00:26:15.322 "trid": { 00:26:15.322 "adrfam": "IPv4", 00:26:15.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.322 "traddr": "10.0.0.2", 00:26:15.322 "trsvcid": "4420", 
00:26:15.322 "trtype": "TCP" 00:26:15.322 }, 00:26:15.322 "vs": { 00:26:15.322 "nvme_version": "1.3" 00:26:15.322 } 00:26:15.322 } 00:26:15.322 ] 00:26:15.322 }, 00:26:15.322 "memory_domains": [ 00:26:15.322 { 00:26:15.322 "dma_device_id": "system", 00:26:15.322 "dma_device_type": 1 00:26:15.322 } 00:26:15.322 ], 00:26:15.322 "name": "Nvme0n1", 00:26:15.322 "num_blocks": 38912, 00:26:15.322 "product_name": "NVMe disk", 00:26:15.322 "supported_io_types": { 00:26:15.322 "abort": true, 00:26:15.322 "compare": true, 00:26:15.322 "compare_and_write": true, 00:26:15.322 "copy": true, 00:26:15.322 "flush": true, 00:26:15.322 "get_zone_info": false, 00:26:15.322 "nvme_admin": true, 00:26:15.322 "nvme_io": true, 00:26:15.322 "nvme_io_md": false, 00:26:15.322 "nvme_iov_md": false, 00:26:15.322 "read": true, 00:26:15.322 "reset": true, 00:26:15.322 "seek_data": false, 00:26:15.322 "seek_hole": false, 00:26:15.322 "unmap": true, 00:26:15.322 "write": true, 00:26:15.322 "write_zeroes": true, 00:26:15.322 "zcopy": false, 00:26:15.322 "zone_append": false, 00:26:15.322 "zone_management": false 00:26:15.322 }, 00:26:15.322 "uuid": "8b28bf2a-e20f-4251-b659-cbe0ca748f09", 00:26:15.322 "zoned": false 00:26:15.322 } 00:26:15.322 ] 00:26:15.322 13:10:27 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=106166 00:26:15.322 13:10:27 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:15.322 13:10:27 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:26:15.322 Running I/O for 10 seconds... 00:26:16.699 Latency(us) 00:26:16.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:16.699 Nvme0n1 : 1.00 6862.00 26.80 0.00 0.00 0.00 0.00 0.00 00:26:16.699 =================================================================================================================== 00:26:16.699 Total : 6862.00 26.80 0.00 0.00 0.00 0.00 0.00 00:26:16.699 00:26:17.264 13:10:29 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:17.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:17.264 Nvme0n1 : 2.00 7250.50 28.32 0.00 0.00 0.00 0.00 0.00 00:26:17.264 =================================================================================================================== 00:26:17.264 Total : 7250.50 28.32 0.00 0.00 0.00 0.00 0.00 00:26:17.264 00:26:17.830 true 00:26:17.830 13:10:30 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:17.830 13:10:30 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:26:18.087 13:10:30 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:26:18.087 13:10:30 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:26:18.087 13:10:30 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 106166 
00:26:18.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:18.344 Nvme0n1 : 3.00 7041.33 27.51 0.00 0.00 0.00 0.00 0.00 00:26:18.344 =================================================================================================================== 00:26:18.344 Total : 7041.33 27.51 0.00 0.00 0.00 0.00 0.00 00:26:18.344 00:26:19.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:19.276 Nvme0n1 : 4.00 7036.75 27.49 0.00 0.00 0.00 0.00 0.00 00:26:19.276 =================================================================================================================== 00:26:19.276 Total : 7036.75 27.49 0.00 0.00 0.00 0.00 0.00 00:26:19.276 00:26:20.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:20.648 Nvme0n1 : 5.00 6681.40 26.10 0.00 0.00 0.00 0.00 0.00 00:26:20.648 =================================================================================================================== 00:26:20.648 Total : 6681.40 26.10 0.00 0.00 0.00 0.00 0.00 00:26:20.648 00:26:21.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:21.286 Nvme0n1 : 6.00 6573.83 25.68 0.00 0.00 0.00 0.00 0.00 00:26:21.286 =================================================================================================================== 00:26:21.286 Total : 6573.83 25.68 0.00 0.00 0.00 0.00 0.00 00:26:21.286 00:26:22.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:22.659 Nvme0n1 : 7.00 6612.86 25.83 0.00 0.00 0.00 0.00 0.00 00:26:22.659 =================================================================================================================== 00:26:22.659 Total : 6612.86 25.83 0.00 0.00 0.00 0.00 0.00 00:26:22.659 00:26:23.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.593 Nvme0n1 : 8.00 6559.12 25.62 0.00 0.00 0.00 0.00 0.00 00:26:23.593 =================================================================================================================== 00:26:23.593 Total : 6559.12 25.62 0.00 0.00 0.00 0.00 0.00 00:26:23.593 00:26:24.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:24.527 Nvme0n1 : 9.00 6576.78 25.69 0.00 0.00 0.00 0.00 0.00 00:26:24.527 =================================================================================================================== 00:26:24.527 Total : 6576.78 25.69 0.00 0.00 0.00 0.00 0.00 00:26:24.527 00:26:25.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.462 Nvme0n1 : 10.00 6492.30 25.36 0.00 0.00 0.00 0.00 0.00 00:26:25.462 =================================================================================================================== 00:26:25.462 Total : 6492.30 25.36 0.00 0.00 0.00 0.00 0.00 00:26:25.462 00:26:25.462 00:26:25.462 Latency(us) 00:26:25.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.462 Nvme0n1 : 10.01 6497.67 25.38 0.00 0.00 19691.96 7119.59 59578.18 00:26:25.462 =================================================================================================================== 00:26:25.462 Total : 6497.67 25.38 0.00 0.00 19691.96 7119.59 59578.18 00:26:25.462 0 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 106132 00:26:25.462 13:10:37 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 106132 ']' 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 106132 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106132 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:25.462 killing process with pid 106132 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106132' 00:26:25.462 Received shutdown signal, test time was about 10.000000 seconds 00:26:25.462 00:26:25.462 Latency(us) 00:26:25.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.462 =================================================================================================================== 00:26:25.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 106132 00:26:25.462 13:10:37 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 106132 00:26:25.721 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.979 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.238 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:26.238 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 105533 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 105533 00:26:26.496 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 105533 Killed "${NVMF_APP[@]}" "$@" 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:26.496 13:10:38 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@485 -- # nvmfpid=106320 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@486 -- # waitforlisten 106320 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 106320 ']' 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.496 13:10:38 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:26.755 [2024-07-15 13:10:38.969752] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:26.755 [2024-07-15 13:10:38.971336] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:26.755 [2024-07-15 13:10:38.971422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.755 [2024-07-15 13:10:39.125691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.755 [2024-07-15 13:10:39.208464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.755 [2024-07-15 13:10:39.208541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.755 [2024-07-15 13:10:39.208556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.755 [2024-07-15 13:10:39.208569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.755 [2024-07-15 13:10:39.208580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.755 [2024-07-15 13:10:39.208624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.013 [2024-07-15 13:10:39.256238] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:27.013 [2024-07-15 13:10:39.256549] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:26:27.578 13:10:39 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.578 13:10:39 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:26:27.578 13:10:39 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:27.578 13:10:39 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.578 13:10:39 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:27.578 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.578 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:27.835 [2024-07-15 13:10:40.269842] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:27.835 [2024-07-15 13:10:40.270257] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:27.835 [2024-07-15 13:10:40.270411] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:28.167 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b28bf2a-e20f-4251-b659-cbe0ca748f09 -t 2000 00:26:28.733 [ 00:26:28.733 { 00:26:28.733 "aliases": [ 00:26:28.733 "lvs/lvol" 00:26:28.733 ], 00:26:28.733 "assigned_rate_limits": { 00:26:28.733 "r_mbytes_per_sec": 0, 00:26:28.733 "rw_ios_per_sec": 0, 00:26:28.733 "rw_mbytes_per_sec": 0, 00:26:28.733 "w_mbytes_per_sec": 0 00:26:28.733 }, 00:26:28.733 "block_size": 4096, 00:26:28.733 "claimed": false, 00:26:28.733 "driver_specific": { 00:26:28.733 "lvol": { 00:26:28.733 "base_bdev": "aio_bdev", 00:26:28.733 "clone": false, 00:26:28.733 "esnap_clone": false, 00:26:28.733 "lvol_store_uuid": "2a15fba0-d471-4529-b474-bf865cb6be7a", 00:26:28.733 "num_allocated_clusters": 38, 00:26:28.733 "snapshot": false, 00:26:28.733 "thin_provision": false 00:26:28.733 } 00:26:28.733 }, 00:26:28.733 "name": "8b28bf2a-e20f-4251-b659-cbe0ca748f09", 00:26:28.733 
"num_blocks": 38912, 00:26:28.733 "product_name": "Logical Volume", 00:26:28.733 "supported_io_types": { 00:26:28.733 "abort": false, 00:26:28.733 "compare": false, 00:26:28.733 "compare_and_write": false, 00:26:28.733 "copy": false, 00:26:28.733 "flush": false, 00:26:28.733 "get_zone_info": false, 00:26:28.733 "nvme_admin": false, 00:26:28.733 "nvme_io": false, 00:26:28.733 "nvme_io_md": false, 00:26:28.733 "nvme_iov_md": false, 00:26:28.733 "read": true, 00:26:28.733 "reset": true, 00:26:28.733 "seek_data": true, 00:26:28.733 "seek_hole": true, 00:26:28.733 "unmap": true, 00:26:28.733 "write": true, 00:26:28.733 "write_zeroes": true, 00:26:28.733 "zcopy": false, 00:26:28.733 "zone_append": false, 00:26:28.733 "zone_management": false 00:26:28.733 }, 00:26:28.733 "uuid": "8b28bf2a-e20f-4251-b659-cbe0ca748f09", 00:26:28.733 "zoned": false 00:26:28.733 } 00:26:28.733 ] 00:26:28.733 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:26:28.733 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:28.733 13:10:40 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:26:28.990 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:26:28.990 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:26:28.990 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:29.247 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:26:29.247 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:29.505 [2024-07-15 13:10:41.849458] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.505 13:10:41 
nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:29.505 13:10:41 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:29.764 2024/07/15 13:10:42 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2a15fba0-d471-4529-b474-bf865cb6be7a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:26:29.764 request: 00:26:29.764 { 00:26:29.764 "method": "bdev_lvol_get_lvstores", 00:26:29.764 "params": { 00:26:29.764 "uuid": "2a15fba0-d471-4529-b474-bf865cb6be7a" 00:26:29.764 } 00:26:29.764 } 00:26:29.764 Got JSON-RPC error response 00:26:29.764 GoRPCClient: error on JSON-RPC call 00:26:29.764 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:26:29.764 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:29.764 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:29.764 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:29.764 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:30.329 aio_bdev 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:30.329 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:30.587 13:10:42 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8b28bf2a-e20f-4251-b659-cbe0ca748f09 -t 2000 00:26:30.846 [ 00:26:30.846 { 00:26:30.846 "aliases": [ 00:26:30.846 "lvs/lvol" 00:26:30.846 ], 00:26:30.846 "assigned_rate_limits": { 00:26:30.846 "r_mbytes_per_sec": 0, 00:26:30.846 "rw_ios_per_sec": 0, 
00:26:30.846 "rw_mbytes_per_sec": 0, 00:26:30.846 "w_mbytes_per_sec": 0 00:26:30.846 }, 00:26:30.846 "block_size": 4096, 00:26:30.846 "claimed": false, 00:26:30.846 "driver_specific": { 00:26:30.846 "lvol": { 00:26:30.846 "base_bdev": "aio_bdev", 00:26:30.846 "clone": false, 00:26:30.846 "esnap_clone": false, 00:26:30.846 "lvol_store_uuid": "2a15fba0-d471-4529-b474-bf865cb6be7a", 00:26:30.846 "num_allocated_clusters": 38, 00:26:30.846 "snapshot": false, 00:26:30.846 "thin_provision": false 00:26:30.846 } 00:26:30.846 }, 00:26:30.846 "name": "8b28bf2a-e20f-4251-b659-cbe0ca748f09", 00:26:30.846 "num_blocks": 38912, 00:26:30.846 "product_name": "Logical Volume", 00:26:30.846 "supported_io_types": { 00:26:30.846 "abort": false, 00:26:30.846 "compare": false, 00:26:30.846 "compare_and_write": false, 00:26:30.846 "copy": false, 00:26:30.846 "flush": false, 00:26:30.846 "get_zone_info": false, 00:26:30.846 "nvme_admin": false, 00:26:30.846 "nvme_io": false, 00:26:30.846 "nvme_io_md": false, 00:26:30.846 "nvme_iov_md": false, 00:26:30.846 "read": true, 00:26:30.846 "reset": true, 00:26:30.846 "seek_data": true, 00:26:30.846 "seek_hole": true, 00:26:30.846 "unmap": true, 00:26:30.846 "write": true, 00:26:30.846 "write_zeroes": true, 00:26:30.846 "zcopy": false, 00:26:30.846 "zone_append": false, 00:26:30.846 "zone_management": false 00:26:30.846 }, 00:26:30.846 "uuid": "8b28bf2a-e20f-4251-b659-cbe0ca748f09", 00:26:30.846 "zoned": false 00:26:30.846 } 00:26:30.846 ] 00:26:30.846 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:26:30.846 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:30.846 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:26:31.103 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:26:31.103 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:31.103 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:26:31.359 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:26:31.359 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8b28bf2a-e20f-4251-b659-cbe0ca748f09 00:26:31.641 13:10:43 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a15fba0-d471-4529-b474-bf865cb6be7a 00:26:32.208 13:10:44 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:26:32.208 13:10:44 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:32.775 00:26:32.775 real 0m23.014s 00:26:32.775 user 0m31.931s 00:26:32.775 sys 0m8.623s 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:26:32.775 ************************************ 00:26:32.775 END TEST lvs_grow_dirty 00:26:32.775 ************************************ 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:32.775 nvmf_trace.0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # nvmfcleanup 00:26:32.775 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.034 rmmod nvme_tcp 00:26:33.034 rmmod nvme_fabrics 00:26:33.034 rmmod nvme_keyring 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # '[' -n 106320 ']' 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # killprocess 106320 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 106320 ']' 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 106320 00:26:33.034 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 106320 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:33.292 killing process with pid 106320 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106320' 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 106320 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 106320 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@282 -- # remove_spdk_ns 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:26:33.292 00:26:33.292 real 0m45.770s 00:26:33.292 user 0m53.044s 00:26:33.292 sys 0m11.995s 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.292 13:10:45 nvmf_tcp_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:33.292 ************************************ 00:26:33.292 END TEST nvmf_lvs_grow 00:26:33.292 ************************************ 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@54 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 ************************************ 00:26:33.550 START TEST nvmf_bdev_io_wait 00:26:33.550 ************************************ 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:26:33.550 * Looking for test storage... 
00:26:33.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.550 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # '[' -z tcp 
']' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # prepare_net_devs 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # local -g is_hw=no 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # remove_spdk_ns 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # nvmf_veth_init 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:26:33.551 Cannot find device "nvmf_tgt_br" 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.551 Cannot find device 
"nvmf_tgt_br2" 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # true 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:26:33.551 Cannot find device "nvmf_tgt_br" 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:26:33.551 Cannot find device "nvmf_tgt_br2" 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:26:33.551 13:10:45 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:26:33.551 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:26:33.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:26:33.809 00:26:33.809 --- 10.0.0.2 ping statistics --- 00:26:33.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.809 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:33.809 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:26:33.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:33.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:33.809 00:26:33.809 --- 10.0.0.3 ping statistics --- 00:26:33.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.810 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:33.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:26:33.810 00:26:33.810 --- 10.0.0.1 ping statistics --- 00:26:33.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.810 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@437 -- # return 0 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:33.810 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # nvmfpid=106736 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # waitforlisten 106736 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 106736 ']' 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.067 13:10:46 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.067 [2024-07-15 13:10:46.328672] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:34.067 [2024-07-15 13:10:46.329779] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:26:34.067 [2024-07-15 13:10:46.329838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.067 [2024-07-15 13:10:46.515848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.325 [2024-07-15 13:10:46.582298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.325 [2024-07-15 13:10:46.582353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.325 [2024-07-15 13:10:46.582365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.325 [2024-07-15 13:10:46.582374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.325 [2024-07-15 13:10:46.582381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.325 [2024-07-15 13:10:46.582470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.325 [2024-07-15 13:10:46.582595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.325 [2024-07-15 13:10:46.582648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.325 [2024-07-15 13:10:46.582655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.325 [2024-07-15 13:10:46.583185] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.957 [2024-07-15 13:10:47.380575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:34.957 [2024-07-15 13:10:47.380698] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:34.957 [2024-07-15 13:10:47.381900] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:34.957 [2024-07-15 13:10:47.381962] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.957 [2024-07-15 13:10:47.387455] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:34.957 Malloc0 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.957 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:35.216 [2024-07-15 13:10:47.443556] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=106790 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=106792 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=106794 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:26:35.216 { 00:26:35.216 "params": { 00:26:35.216 "name": "Nvme$subsystem", 00:26:35.216 "trtype": "$TEST_TRANSPORT", 00:26:35.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.216 "adrfam": "ipv4", 00:26:35.216 "trsvcid": "$NVMF_PORT", 00:26:35.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.216 "hdgst": ${hdgst:-false}, 00:26:35.216 "ddgst": ${ddgst:-false} 00:26:35.216 }, 00:26:35.216 "method": "bdev_nvme_attach_controller" 00:26:35.216 } 00:26:35.216 EOF 00:26:35.216 )") 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:26:35.216 { 00:26:35.216 "params": { 00:26:35.216 "name": "Nvme$subsystem", 00:26:35.216 "trtype": "$TEST_TRANSPORT", 00:26:35.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.216 "adrfam": "ipv4", 00:26:35.216 "trsvcid": "$NVMF_PORT", 00:26:35.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.216 "hdgst": ${hdgst:-false}, 00:26:35.216 "ddgst": ${ddgst:-false} 00:26:35.216 }, 00:26:35.216 "method": "bdev_nvme_attach_controller" 00:26:35.216 } 00:26:35.216 EOF 00:26:35.216 )") 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=106796 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:26:35.216 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:26:35.216 { 00:26:35.216 "params": { 00:26:35.216 "name": "Nvme$subsystem", 00:26:35.216 "trtype": "$TEST_TRANSPORT", 00:26:35.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.216 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "$NVMF_PORT", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.217 "hdgst": ${hdgst:-false}, 00:26:35.217 "ddgst": ${ddgst:-false} 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 } 00:26:35.217 EOF 00:26:35.217 )") 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:26:35.217 { 00:26:35.217 "params": { 00:26:35.217 "name": "Nvme$subsystem", 00:26:35.217 "trtype": "$TEST_TRANSPORT", 00:26:35.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.217 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "$NVMF_PORT", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.217 "hdgst": ${hdgst:-false}, 00:26:35.217 "ddgst": ${ddgst:-false} 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 } 00:26:35.217 EOF 00:26:35.217 )") 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 
00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:26:35.217 "params": { 00:26:35.217 "name": "Nvme1", 00:26:35.217 "trtype": "tcp", 00:26:35.217 "traddr": "10.0.0.2", 00:26:35.217 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "4420", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.217 "hdgst": false, 00:26:35.217 "ddgst": false 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 }' 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:26:35.217 "params": { 00:26:35.217 "name": "Nvme1", 00:26:35.217 "trtype": "tcp", 00:26:35.217 "traddr": "10.0.0.2", 00:26:35.217 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "4420", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.217 "hdgst": false, 00:26:35.217 "ddgst": false 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 }' 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:26:35.217 "params": { 00:26:35.217 "name": "Nvme1", 00:26:35.217 "trtype": "tcp", 00:26:35.217 "traddr": "10.0.0.2", 00:26:35.217 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "4420", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.217 "hdgst": false, 00:26:35.217 "ddgst": false 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 }' 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:26:35.217 "params": { 00:26:35.217 "name": "Nvme1", 00:26:35.217 "trtype": "tcp", 00:26:35.217 "traddr": "10.0.0.2", 00:26:35.217 "adrfam": "ipv4", 00:26:35.217 "trsvcid": "4420", 00:26:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.217 "hdgst": false, 00:26:35.217 "ddgst": false 00:26:35.217 }, 00:26:35.217 "method": "bdev_nvme_attach_controller" 00:26:35.217 }' 00:26:35.217 [2024-07-15 13:10:47.496258] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:35.217 [2024-07-15 13:10:47.496335] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:35.217 13:10:47 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 106790 00:26:35.217 [2024-07-15 13:10:47.517389] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:26:35.217 [2024-07-15 13:10:47.517491] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:26:35.217 [2024-07-15 13:10:47.518972] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:35.217 [2024-07-15 13:10:47.519043] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:26:35.217 [2024-07-15 13:10:47.565093] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:35.217 [2024-07-15 13:10:47.565831] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:26:35.217 [2024-07-15 13:10:47.667740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.475 [2024-07-15 13:10:47.705502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.475 [2024-07-15 13:10:47.714175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:35.475 [2024-07-15 13:10:47.752615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:35.475 [2024-07-15 13:10:47.772293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.475 [2024-07-15 13:10:47.798890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.475 Running I/O for 1 seconds... 00:26:35.475 [2024-07-15 13:10:47.847384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:35.475 Running I/O for 1 seconds... 00:26:35.475 [2024-07-15 13:10:47.871478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:35.733 Running I/O for 1 seconds... 00:26:35.733 Running I/O for 1 seconds... 
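[Editor's note] The xtrace above is the parallel bdevperf launch performed by bdev_io_wait.sh: four instances run concurrently against the same NVMe/TCP subsystem, one per workload (write, read, flush, unmap), each on its own core mask and SHM id, and each fed a JSON config over /dev/fd/63 that attaches a controller named Nvme1 to 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1). A condensed bash sketch of that launch follows; it assumes the harness's gen_nvmf_target_json helper and the bdevperf path visible in the trace, and it is an illustration of the pattern rather than a verbatim extract of the script.

    # Sketch only -- assumes the SPDK test-harness helpers (gen_nvmf_target_json) are sourced.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # One 1-second (-t 1), queue-depth-128 (-q 128), 4 KiB (-o 4096) job per workload,
    # each pinned to its own core mask (-m) with its own SHM id (-i) and 256 MB of
    # memory (-s 256); the JSON on /dev/fd/63 attaches bdev Nvme1 over TCP to
    # 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1.
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The four per-workload Latency(us) summaries that follow are the output of these instances.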
00:26:36.665 00:26:36.665 Latency(us) 00:26:36.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.665 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:26:36.665 Nvme1n1 : 1.02 6845.11 26.74 0.00 0.00 18625.83 5272.67 37176.79 00:26:36.665 =================================================================================================================== 00:26:36.665 Total : 6845.11 26.74 0.00 0.00 18625.83 5272.67 37176.79 00:26:36.665 00:26:36.665 Latency(us) 00:26:36.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.665 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:26:36.665 Nvme1n1 : 1.00 187470.95 732.31 0.00 0.00 679.99 283.00 942.08 00:26:36.665 =================================================================================================================== 00:26:36.665 Total : 187470.95 732.31 0.00 0.00 679.99 283.00 942.08 00:26:36.665 00:26:36.665 Latency(us) 00:26:36.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.665 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:26:36.665 Nvme1n1 : 1.00 7198.07 28.12 0.00 0.00 17726.32 4885.41 42181.35 00:26:36.665 =================================================================================================================== 00:26:36.665 Total : 7198.07 28.12 0.00 0.00 17726.32 4885.41 42181.35 00:26:36.665 00:26:36.665 Latency(us) 00:26:36.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.665 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:26:36.665 Nvme1n1 : 1.02 4904.25 19.16 0.00 0.00 25848.07 8757.99 35508.60 00:26:36.665 =================================================================================================================== 00:26:36.665 Total : 4904.25 19.16 0.00 0.00 25848.07 8757.99 35508.60 00:26:36.665 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 106792 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 106794 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 106796 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # nvmfcleanup 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:36.923 13:10:49 
nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:36.923 rmmod nvme_tcp 00:26:36.923 rmmod nvme_fabrics 00:26:36.923 rmmod nvme_keyring 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # '[' -n 106736 ']' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # killprocess 106736 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 106736 ']' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 106736 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106736 00:26:36.923 killing process with pid 106736 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106736' 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 106736 00:26:36.923 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 106736 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # remove_spdk_ns 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:26:37.179 ************************************ 00:26:37.179 END TEST nvmf_bdev_io_wait 00:26:37.179 ************************************ 00:26:37.179 00:26:37.179 real 0m3.757s 00:26:37.179 user 0m12.346s 00:26:37.179 sys 0m2.382s 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:26:37.179 13:10:49 
nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@55 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.179 13:10:49 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:37.179 ************************************ 00:26:37.180 START TEST nvmf_queue_depth 00:26:37.180 ************************************ 00:26:37.180 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:26:37.437 * Looking for test storage... 00:26:37.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:37.438 13:10:49 
nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@452 -- # prepare_net_devs 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # local -g is_hw=no 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # remove_spdk_ns 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # nvmf_veth_init 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:26:37.438 Cannot find device "nvmf_tgt_br" 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:26:37.438 Cannot find device "nvmf_tgt_br2" 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:26:37.438 Cannot find device "nvmf_tgt_br" 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:26:37.438 Cannot find device "nvmf_tgt_br2" 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:37.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:37.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 
-- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:26:37.438 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:37.696 13:10:49 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:37.696 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:26:37.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:26:37.696 00:26:37.696 --- 10.0.0.2 ping statistics --- 00:26:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.696 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:26:37.696 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:26:37.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:37.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:26:37.696 00:26:37.696 --- 10.0.0.3 ping statistics --- 00:26:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.696 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:37.696 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:37.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:26:37.697 00:26:37.697 --- 10.0.0.1 ping statistics --- 00:26:37.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.697 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@437 -- # return 0 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@485 -- # nvmfpid=107024 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@486 -- # waitforlisten 107024 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 107024 ']' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.697 13:10:50 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:37.697 [2024-07-15 13:10:50.117505] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:37.697 [2024-07-15 13:10:50.119117] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:26:37.697 [2024-07-15 13:10:50.119182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.955 [2024-07-15 13:10:50.252925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.955 [2024-07-15 13:10:50.317114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.955 [2024-07-15 13:10:50.317162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.955 [2024-07-15 13:10:50.317173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.955 [2024-07-15 13:10:50.317182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.955 [2024-07-15 13:10:50.317189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.955 [2024-07-15 13:10:50.317214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.955 [2024-07-15 13:10:50.371698] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:37.955 [2024-07-15 13:10:50.372012] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 [2024-07-15 13:10:51.165907] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 Malloc0 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:38.888 13:10:51 
nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.888 [2024-07-15 13:10:51.222077] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=107074 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:38.888 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 107074 /var/tmp/bdevperf.sock 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 107074 ']' 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.889 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:38.889 [2024-07-15 13:10:51.287241] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:26:38.889 [2024-07-15 13:10:51.287350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107074 ] 00:26:39.146 [2024-07-15 13:10:51.427019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.146 [2024-07-15 13:10:51.495591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.146 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.146 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:26:39.146 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:39.146 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.146 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:39.403 NVMe0n1 00:26:39.403 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.403 13:10:51 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.403 Running I/O for 10 seconds... 00:26:51.595 00:26:51.595 Latency(us) 00:26:51.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.595 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:26:51.595 Verification LBA range: start 0x0 length 0x4000 00:26:51.595 NVMe0n1 : 10.11 7407.28 28.93 0.00 0.00 137443.40 21209.83 140127.88 00:26:51.595 =================================================================================================================== 00:26:51.595 Total : 7407.28 28.93 0.00 0.00 137443.40 21209.83 140127.88 00:26:51.595 0 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 107074 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 107074 ']' 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 107074 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107074 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.595 killing process with pid 107074 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107074' 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 107074 00:26:51.595 Received shutdown signal, test time was about 10.000000 seconds 00:26:51.595 00:26:51.595 Latency(us) 00:26:51.595 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.595 =================================================================================================================== 00:26:51.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.595 13:11:01 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 107074 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # nvmfcleanup 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.595 rmmod nvme_tcp 00:26:51.595 rmmod nvme_fabrics 00:26:51.595 rmmod nvme_keyring 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:26:51.595 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # '[' -n 107024 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # killprocess 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 107024 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:51.596 killing process with pid 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107024' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 107024 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@282 -- # remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:26:51.596 00:26:51.596 real 0m12.828s 00:26:51.596 user 0m20.493s 00:26:51.596 sys 0m2.434s 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.596 ************************************ 00:26:51.596 END TEST nvmf_queue_depth 00:26:51.596 ************************************ 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@56 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:51.596 ************************************ 00:26:51.596 START TEST nvmf_target_multipath 00:26:51.596 ************************************ 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:26:51.596 * Looking for test storage... 
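For reference, the nvmf_queue_depth test that just completed above reduces to a short target/initiator sequence: stand up the interrupt-mode TCP target inside the test namespace, expose a 64 MiB malloc bdev over NVMe-oF, and drive it with bdevperf at a queue depth of 1024. The following is a condensed sketch assembled from the commands visible in the trace, not the test script itself (the rpc_cmd wrapper used by the test resolves to scripts/rpc.py, paths are shortened to the SPDK repo root, and the NQN, address and port are the ones printed above):

    # target side: TCP transport, 64 MiB / 512-byte-block malloc bdev, subsystem + listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevperf (-z waits for RPC), 4 KiB verify workload, queue depth 1024, 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # once the bdevperf RPC socket is listening, attach the remote namespace and start the run
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With the target pinned to a single interrupt-mode core (-m 0x2), that run reported roughly 7,400 IOPS of 4 KiB verify I/O at queue depth 1024 before the usual teardown (kill bdevperf, kill nvmf_tgt, unload the nvme-tcp modules, flush the test interfaces).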
00:26:51.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.596 13:11:02 
nvmf_tcp_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # nvmf_veth_init 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:51.596 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:51.597 13:11:02 
nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:26:51.597 Cannot find device "nvmf_tgt_br" 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:26:51.597 Cannot find device "nvmf_tgt_br2" 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:26:51.597 Cannot find device "nvmf_tgt_br" 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:26:51.597 Cannot find device "nvmf_tgt_br2" 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:51.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:51.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:26:51.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:51.597 00:26:51.597 --- 10.0.0.2 ping statistics --- 00:26:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.597 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:26:51.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:51.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:26:51.597 00:26:51.597 --- 10.0.0.3 ping statistics --- 00:26:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.597 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:51.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:26:51.597 00:26:51.597 --- 10.0.0.1 ping statistics --- 00:26:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.597 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@437 -- # return 0 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@485 -- # nvmfpid=107378 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@486 -- # waitforlisten 107378 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 107378 ']' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.597 13:11:02 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:51.597 [2024-07-15 13:11:03.010498] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
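The nvmf_veth_init sequence above is what NET_TYPE=virt means in practice: the target lives in the nvmf_tgt_ns_spdk network namespace with two addresses (10.0.0.2 and 10.0.0.3), the initiator keeps nvmf_init_if with 10.0.0.1, and the host-side peers of the three veth pairs are enslaved to the nvmf_br bridge. Condensed to its essentials, it looks like the sketch below (interface names, addresses and iptables rules are exactly the ones in the trace; the link-up commands are folded into a loop for brevity, and the best-effort cleanup of leftovers from the previous test is omitted):

    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one initiator-facing, two target-facing (one per path)
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and address everything in 10.0.0.0/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # let NVMe/TCP traffic in on the initiator interface and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) confirm the topology, after which nvmf_tgt is started via ip netns exec nvmf_tgt_ns_spdk with --interrupt-mode, which is why every listener address in this suite sits on 10.0.0.0/24.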
00:26:51.597 [2024-07-15 13:11:03.012325] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:26:51.597 [2024-07-15 13:11:03.012425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.597 [2024-07-15 13:11:03.154828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.597 [2024-07-15 13:11:03.236889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.597 [2024-07-15 13:11:03.236953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.597 [2024-07-15 13:11:03.236964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.597 [2024-07-15 13:11:03.236973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.597 [2024-07-15 13:11:03.236980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.597 [2024-07-15 13:11:03.237108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.597 [2024-07-15 13:11:03.237222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.597 [2024-07-15 13:11:03.237551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.597 [2024-07-15 13:11:03.237563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.597 [2024-07-15 13:11:03.299730] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:51.597 [2024-07-15 13:11:03.299815] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:51.597 [2024-07-15 13:11:03.299826] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:51.597 [2024-07-15 13:11:03.299841] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:51.597 [2024-07-15 13:11:03.300150] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
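With four reactors up and their poll groups switched to interrupt mode, the multipath test that follows configures the same kind of target but with ANA reporting enabled and a listener on each target-side interface, then connects to the subsystem through both paths from the host. In outline it reduces to the sketch below, condensed from the RPC calls and nvme connect invocations traced after this point; the check_ana_state helper is reconstructed from its xtrace output and is a sketch of the logic, not the literal multipath.sh source:

    # target: subsystem with ANA reporting (-r) and one listener per target-side interface
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host: connect through both listeners so the kernel builds one subsystem with two paths
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

    # poll the per-path ANA state in sysfs until it matches the expected value (~20 s budget)
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20 ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            if ((timeout-- == 0)); then
                return 1
            fi
        done
    }

The two paths show up as nvme0c0n1 and nvme0c1n1 under /sys/class/nvme-subsystem/nvme-subsys0, and the test flips their listeners between optimized, inaccessible and non_optimized with nvmf_subsystem_listener_set_ana_state while fio (via scripts/fio-wrapper: 4 KiB random read/write at iodepth 128 against /dev/nvme0n1) keeps I/O running, checking after each change that the sysfs ana_state follows.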
00:26:51.597 13:11:03 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.597 13:11:03 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:26:51.597 13:11:03 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:51.597 13:11:03 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:51.597 13:11:03 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:51.598 13:11:04 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.598 13:11:04 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:51.855 [2024-07-15 13:11:04.294264] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.112 13:11:04 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:52.369 Malloc0 00:26:52.369 13:11:04 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:26:52.625 13:11:05 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.883 13:11:05 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.139 [2024-07-15 13:11:05.598498] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.396 13:11:05 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:53.653 [2024-07-15 13:11:05.870433] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:53.653 13:11:05 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:26:53.653 13:11:06 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:26:53.910 13:11:06 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:26:53.910 13:11:06 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.910 13:11:06 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.911 13:11:06 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:53.911 13:11:06 
nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=107517 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:26:55.809 13:11:08 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:26:55.809 [global] 00:26:55.809 thread=1 00:26:55.809 invalidate=1 00:26:55.809 rw=randrw 00:26:55.809 time_based=1 00:26:55.809 runtime=6 00:26:55.809 ioengine=libaio 00:26:55.809 direct=1 00:26:55.809 bs=4096 00:26:55.809 iodepth=128 00:26:55.809 norandommap=0 00:26:55.809 numjobs=1 00:26:55.809 00:26:55.809 verify_dump=1 00:26:55.809 verify_backlog=512 00:26:55.809 verify_state_save=0 00:26:55.809 do_verify=1 00:26:55.809 verify=crc32c-intel 00:26:55.809 [job0] 00:26:55.809 filename=/dev/nvme0n1 00:26:55.809 Could not set queue depth (nvme0n1) 00:26:56.067 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:56.067 fio-3.35 00:26:56.067 Starting 1 thread 00:26:57.014 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:57.272 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:26:57.530 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:26:57.531 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:57.531 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:57.531 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:57.531 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:57.531 13:11:09 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:58.464 13:11:10 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:58.464 13:11:10 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:58.464 13:11:10 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:58.464 13:11:10 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:59.029 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:59.287 13:11:11 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:00.220 13:11:12 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:00.220 13:11:12 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:00.220 13:11:12 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:00.220 13:11:12 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 107517 00:27:02.118 00:27:02.118 job0: (groupid=0, jobs=1): err= 0: pid=107538: Mon Jul 15 13:11:14 2024 00:27:02.118 read: IOPS=8833, BW=34.5MiB/s (36.2MB/s)(208MiB/6015msec) 00:27:02.118 slat (usec): min=3, max=14670, avg=65.41, stdev=327.25 00:27:02.118 clat (usec): min=362, max=34322, avg=10021.32, stdev=3157.41 00:27:02.118 lat (usec): min=403, max=34344, avg=10086.73, stdev=3181.83 00:27:02.118 clat percentiles (usec): 00:27:02.118 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 7898], 00:27:02.118 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[ 9896], 00:27:02.118 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12649], 95.00th=[15401], 00:27:02.118 | 99.00th=[22676], 99.50th=[26608], 99.90th=[29754], 99.95th=[31851], 00:27:02.118 | 99.99th=[33817] 00:27:02.118 bw ( KiB/s): min= 2672, max=24520, per=50.89%, avg=17983.33, stdev=7286.52, samples=12 00:27:02.118 iops : min= 668, max= 6130, avg=4495.83, stdev=1821.63, samples=12 00:27:02.118 write: IOPS=5185, BW=20.3MiB/s (21.2MB/s)(106MiB/5228msec); 0 zone resets 00:27:02.118 slat (usec): min=12, max=3589, avg=78.36, stdev=193.72 00:27:02.118 clat (usec): min=334, max=33499, avg=8498.72, stdev=2795.05 00:27:02.118 lat (usec): min=366, max=33544, avg=8577.08, stdev=2815.98 00:27:02.118 clat percentiles (usec): 00:27:02.118 | 1.00th=[ 3490], 5.00th=[ 5276], 10.00th=[ 6128], 20.00th=[ 6849], 00:27:02.118 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8586], 00:27:02.118 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[12649], 00:27:02.118 | 99.00th=[19530], 99.50th=[20579], 99.90th=[28181], 99.95th=[31065], 00:27:02.118 | 99.99th=[33424] 00:27:02.118 bw ( KiB/s): min= 2624, max=23952, per=86.91%, avg=18028.00, stdev=7102.89, samples=12 00:27:02.118 iops : min= 656, max= 5988, avg=4507.00, stdev=1775.72, samples=12 00:27:02.118 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:27:02.118 lat (msec) : 2=0.15%, 4=0.71%, 10=68.62%, 20=28.76%, 50=1.71% 00:27:02.118 cpu : usr=4.92%, sys=23.58%, ctx=5185, majf=0, minf=121 00:27:02.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:02.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:02.118 issued rwts: total=53133,27110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:02.118 00:27:02.118 Run status group 0 (all jobs): 00:27:02.118 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=208MiB (218MB), run=6015-6015msec 00:27:02.118 WRITE: bw=20.3MiB/s (21.2MB/s), 20.3MiB/s-20.3MiB/s (21.2MB/s-21.2MB/s), io=106MiB (111MB), 
run=5228-5228msec 00:27:02.118 00:27:02.118 Disk stats (read/write): 00:27:02.118 nvme0n1: ios=53003/26714, merge=0/0, ticks=493280/207067, in_queue=700347, util=98.62% 00:27:02.118 13:11:14 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:02.375 13:11:14 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:27:02.634 13:11:15 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=107658 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:04.064 13:11:16 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:27:04.064 [global] 00:27:04.064 thread=1 00:27:04.064 invalidate=1 00:27:04.064 rw=randrw 00:27:04.064 time_based=1 00:27:04.064 runtime=6 00:27:04.064 ioengine=libaio 00:27:04.064 direct=1 00:27:04.064 bs=4096 00:27:04.064 iodepth=128 00:27:04.064 norandommap=0 00:27:04.064 numjobs=1 00:27:04.064 00:27:04.064 verify_dump=1 00:27:04.064 verify_backlog=512 00:27:04.064 verify_state_save=0 00:27:04.064 do_verify=1 00:27:04.064 verify=crc32c-intel 00:27:04.064 [job0] 00:27:04.064 filename=/dev/nvme0n1 00:27:04.064 Could not set queue depth (nvme0n1) 00:27:04.064 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:04.064 fio-3.35 00:27:04.064 Starting 1 thread 00:27:04.998 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:04.998 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:05.255 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:05.256 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:05.256 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:05.256 13:11:17 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:06.627 13:11:18 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:06.627 13:11:18 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:06.627 13:11:18 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:06.627 13:11:18 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:06.627 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:07.193 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:27:07.193 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:27:07.193 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:07.194 13:11:19 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:27:08.129 13:11:20 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:27:08.129 13:11:20 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:08.129 13:11:20 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:08.129 13:11:20 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 107658 00:27:10.029 00:27:10.029 job0: (groupid=0, jobs=1): err= 0: pid=107685: Mon Jul 15 13:11:22 2024 00:27:10.029 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6016msec) 00:27:10.029 slat (usec): min=3, max=7843, avg=50.70, stdev=270.25 00:27:10.029 clat (usec): min=274, max=35674, avg=8714.51, stdev=3292.10 00:27:10.029 lat (usec): min=290, max=35687, avg=8765.21, stdev=3318.86 00:27:10.029 clat percentiles (usec): 00:27:10.029 | 1.00th=[ 1598], 5.00th=[ 3523], 10.00th=[ 5145], 20.00th=[ 6783], 00:27:10.029 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8979], 00:27:10.029 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11994], 95.00th=[13698], 00:27:10.029 | 99.00th=[20579], 99.50th=[20841], 99.90th=[26084], 99.95th=[29492], 00:27:10.029 | 99.99th=[32113] 00:27:10.029 bw ( KiB/s): min= 3688, max=34602, per=51.33%, avg=21178.67, stdev=9199.63, samples=12 00:27:10.029 iops : min= 922, max= 8650, avg=5294.58, stdev=2299.80, samples=12 00:27:10.029 write: IOPS=6215, BW=24.3MiB/s (25.5MB/s)(125MiB/5130msec); 0 zone resets 00:27:10.029 slat (usec): min=5, max=6802, avg=63.99, stdev=139.74 00:27:10.029 clat (usec): min=371, max=31526, avg=6977.96, stdev=2513.42 00:27:10.029 lat (usec): min=443, max=31566, avg=7041.94, stdev=2526.25 00:27:10.029 clat percentiles (usec): 00:27:10.029 | 1.00th=[ 1045], 5.00th=[ 2507], 10.00th=[ 3687], 20.00th=[ 5014], 00:27:10.029 | 30.00th=[ 5932], 40.00th=[ 6587], 50.00th=[ 7111], 60.00th=[ 7570], 00:27:10.029 | 70.00th=[ 8225], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10552], 00:27:10.029 | 99.00th=[12518], 99.50th=[13698], 99.90th=[21365], 99.95th=[27919], 00:27:10.029 | 99.99th=[30016] 00:27:10.029 bw ( KiB/s): min= 3776, max=33972, per=85.31%, avg=21209.58, stdev=9038.84, samples=12 00:27:10.029 iops : min= 944, max= 8493, avg=5302.33, stdev=2259.66, samples=12 00:27:10.029 lat (usec) : 500=0.02%, 750=0.11%, 1000=0.30% 00:27:10.029 lat (msec) : 2=1.93%, 4=5.56%, 10=71.95%, 20=18.64%, 50=1.50% 00:27:10.029 cpu : usr=6.35%, sys=28.13%, ctx=7556, majf=0, minf=133 00:27:10.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:10.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:10.029 issued rwts: total=62057,31883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:10.029 00:27:10.029 Run status group 0 (all jobs): 00:27:10.029 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6016-6016msec 00:27:10.029 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=125MiB (131MB), run=5130-5130msec 00:27:10.029 00:27:10.029 Disk stats (read/write): 00:27:10.029 nvme0n1: ios=61523/31646, merge=0/0, ticks=486777/193840, in_queue=680617, util=98.71% 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:10.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:27:10.029 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.599 rmmod nvme_tcp 00:27:10.599 rmmod nvme_fabrics 00:27:10.599 rmmod nvme_keyring 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n 107378 ']' 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # killprocess 107378 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 107378 ']' 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 107378 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107378 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 
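The multipath case above boils down to a short RPC and nvme-cli sequence before the target is torn down. The sketch below condenses it; paths, addresses and the serial number are taken verbatim from the trace, while error handling, the --hostnqn/--hostid flags and the check_ana_state polling loop are left out, so treat it as an illustration rather than the test script itself.

# Hedged condensation of the multipath sequence traced above (not the
# actual multipath.sh): one malloc namespace behind two TCP listeners,
# both paths connected from the host, then ANA states flipped while fio
# runs against the multipath device.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

# Host side: connect both listeners so the kernel assembles one
# multipath device (nvme0n1 with paths nvme0c0n1 / nvme0c1n1 here).
# The test additionally passes --hostnqn/--hostid, omitted for brevity.
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 -g -G
nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G

# Fail over: make the first path inaccessible, keep the second usable,
# and confirm via sysfs (device names depend on enumeration order).
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

# Teardown mirrors what the trace shows next.
nvme disconnect -n "$nqn"
$rpc nvmf_delete_subsystem "$nqn"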
00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107378' 00:27:10.599 killing process with pid 107378 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 107378 00:27:10.599 13:11:22 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 107378 00:27:10.858 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:10.858 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:10.858 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:27:10.859 00:27:10.859 real 0m20.672s 00:27:10.859 user 1m5.671s 00:27:10.859 sys 0m14.265s 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:10.859 ************************************ 00:27:10.859 END TEST nvmf_target_multipath 00:27:10.859 ************************************ 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@57 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:10.859 ************************************ 00:27:10.859 START TEST nvmf_zcopy 00:27:10.859 ************************************ 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:27:10.859 * Looking for test storage... 
00:27:10.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:10.859 13:11:23 
nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # nvmf_veth_init 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:10.859 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:27:11.118 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:27:11.118 Cannot find device "nvmf_tgt_br" 00:27:11.118 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:27:11.119 Cannot find device "nvmf_tgt_br2" 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:27:11.119 Cannot find device "nvmf_tgt_br" 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip 
link set nvmf_tgt_br2 down 00:27:11.119 Cannot find device "nvmf_tgt_br2" 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:11.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:11.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:11.119 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:11.377 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:27:11.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:27:11.378 00:27:11.378 --- 10.0.0.2 ping statistics --- 00:27:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.378 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:27:11.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:11.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:11.378 00:27:11.378 --- 10.0.0.3 ping statistics --- 00:27:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.378 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:11.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:27:11.378 00:27:11.378 --- 10.0.0.1 ping statistics --- 00:27:11.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.378 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@437 -- # return 0 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@485 -- # nvmfpid=107955 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@486 -- # waitforlisten 107955 00:27:11.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
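The nvmftestinit plumbing recorded above is easier to read as one script: the initiator keeps 10.0.0.1 in the root namespace, the target namespace owns 10.0.0.2 and 10.0.0.3, and a bridge joins the three veth peers while iptables opens TCP/4420. The reconstruction below is a hedged, trimmed version of that sequence; interface names and addresses are copied from the trace, and the pre-cleanup steps (the "Cannot find device" noise) are dropped.

# Hedged reconstruction of nvmf_veth_init as exercised above.
set -euo pipefail
ns=nvmf_tgt_ns_spdk
ip netns add "$ns"

# One veth pair per endpoint; the *_br ends stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up

# Bridge the root-namespace ends and open TCP/4420 toward the initiator.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same reachability check the log records (~0.1 ms round trips).
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3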
00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 107955 ']' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.378 13:11:23 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.378 [2024-07-15 13:11:23.830395] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:11.378 [2024-07-15 13:11:23.831527] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:27:11.378 [2024-07-15 13:11:23.831609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.637 [2024-07-15 13:11:23.971974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.637 [2024-07-15 13:11:24.045380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.637 [2024-07-15 13:11:24.045465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.637 [2024-07-15 13:11:24.045486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.637 [2024-07-15 13:11:24.045501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.637 [2024-07-15 13:11:24.045514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.637 [2024-07-15 13:11:24.045562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.637 [2024-07-15 13:11:24.105217] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:11.637 [2024-07-15 13:11:24.105658] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
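With the network up, the zcopy case launches the target inside the namespace with --interrupt-mode, which is what produces the "Set spdk_thread ... to intr mode" notices just above. A hedged sketch of that launch-and-wait step follows; the binary, flags and socket path are the ones in the trace, while the polling loop merely stands in for the suite's waitforlisten helper and rpc_get_methods is used only as a cheap readiness probe.

# Hedged sketch: start nvmf_tgt in interrupt mode and wait for its RPC socket.
ns=nvmf_tgt_ns_spdk
bin=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Single core (-m 0x2), all tracepoint groups (-e 0xFFFF), interrupt mode.
ip netns exec "$ns" "$bin" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Poll until the RPC server answers on the UNIX socket, bailing out if
# the target dies first.
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) ready in interrupt mode"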
00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 [2024-07-15 13:11:24.186714] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 [2024-07-15 13:11:24.206736] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 malloc0 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:27:11.897 { 00:27:11.897 "params": { 00:27:11.897 "name": "Nvme$subsystem", 00:27:11.897 "trtype": "$TEST_TRANSPORT", 00:27:11.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.897 "adrfam": "ipv4", 00:27:11.897 "trsvcid": "$NVMF_PORT", 00:27:11.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.897 "hdgst": ${hdgst:-false}, 00:27:11.897 "ddgst": ${ddgst:-false} 00:27:11.897 }, 00:27:11.897 "method": "bdev_nvme_attach_controller" 00:27:11.897 } 00:27:11.897 EOF 00:27:11.897 )") 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:27:11.897 13:11:24 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:27:11.897 "params": { 00:27:11.897 "name": "Nvme1", 00:27:11.897 "trtype": "tcp", 00:27:11.897 "traddr": "10.0.0.2", 00:27:11.897 "adrfam": "ipv4", 00:27:11.897 "trsvcid": "4420", 00:27:11.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.897 "hdgst": false, 00:27:11.897 "ddgst": false 00:27:11.897 }, 00:27:11.897 "method": "bdev_nvme_attach_controller" 00:27:11.897 }' 00:27:11.897 [2024-07-15 13:11:24.320054] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:27:11.897 [2024-07-15 13:11:24.320191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107987 ] 00:27:12.156 [2024-07-15 13:11:24.496110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.156 [2024-07-15 13:11:24.584850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.414 Running I/O for 10 seconds... 
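The verify pass that follows is driven entirely by the JSON fragment printed above: gen_nvmf_target_json emits a one-controller bdev config (Nvme1 over TCP to 10.0.0.2:4420, digests off) and bdevperf reads it through a process-substitution fd, hence --json /dev/fd/62 in the trace. A hedged equivalent, assuming the suite's common.sh has been sourced and its target address variables are already populated by nvmftestinit, is:

# Write the generated config to a file instead of a /dev/fd handle so it
# can be inspected afterwards; the bdevperf flags match the trace above.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
gen_nvmf_target_json > /tmp/zcopy_bdevperf.json
"$bdevperf" --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192

The second pass further down swaps the workload for a 5-second 50/50 randrw run (-t 5 -w randrw -M 50 -o 8192) while nvmf_subsystem_add_ns calls are repeatedly issued against the live subsystem; the "Requested NSID 1 already in use" JSON-RPC errors in the trace are the visible result of those calls.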
00:27:22.447 00:27:22.447 Latency(us) 00:27:22.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:27:22.447 Verification LBA range: start 0x0 length 0x1000 00:27:22.447 Nvme1n1 : 10.02 4717.82 36.86 0.00 0.00 27049.40 3842.79 63867.81 00:27:22.447 =================================================================================================================== 00:27:22.447 Total : 4717.82 36.86 0.00 0.00 27049.40 3842.79 63867.81 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=108098 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:27:22.706 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:27:22.706 { 00:27:22.707 "params": { 00:27:22.707 "name": "Nvme$subsystem", 00:27:22.707 "trtype": "$TEST_TRANSPORT", 00:27:22.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.707 "adrfam": "ipv4", 00:27:22.707 "trsvcid": "$NVMF_PORT", 00:27:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.707 "hdgst": ${hdgst:-false}, 00:27:22.707 "ddgst": ${ddgst:-false} 00:27:22.707 }, 00:27:22.707 "method": "bdev_nvme_attach_controller" 00:27:22.707 } 00:27:22.707 EOF 00:27:22.707 )") 00:27:22.707 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:27:22.707 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 
00:27:22.707 [2024-07-15 13:11:34.954294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:34.954352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:27:22.707 13:11:34 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:27:22.707 "params": { 00:27:22.707 "name": "Nvme1", 00:27:22.707 "trtype": "tcp", 00:27:22.707 "traddr": "10.0.0.2", 00:27:22.707 "adrfam": "ipv4", 00:27:22.707 "trsvcid": "4420", 00:27:22.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.707 "hdgst": false, 00:27:22.707 "ddgst": false 00:27:22.707 }, 00:27:22.707 "method": "bdev_nvme_attach_controller" 00:27:22.707 }' 00:27:22.707 2024/07/15 13:11:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:34.966245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:34.966296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:34.978236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:34.978295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:34.990297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:34.990365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.002240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.002293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 [2024-07-15 13:11:35.002998] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:27:22.707 [2024-07-15 13:11:35.003096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108098 ] 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.014230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.014277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.026229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.026273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.034188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.034225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.042194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.042228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.054216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.054255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.066234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.066281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.078316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.078388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.090231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.090275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.102216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.102256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.110177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.110204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.118174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.118202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.126186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.126219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:22.707 [2024-07-15 13:11:35.134248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.134299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.138723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.707 [2024-07-15 13:11:35.142226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.142285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.154228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.154273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.707 [2024-07-15 13:11:35.166252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.707 [2024-07-15 13:11:35.166306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.707 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.178230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.178276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.190226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.190269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.202241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.202297] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.214236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.214287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.226030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.967 [2024-07-15 13:11:35.226210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.226231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.238300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.238370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.250271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.250327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.262240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.262291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.274232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.274281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.286297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.286354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.298214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.298254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.310264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.310313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.322265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.322317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.330216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.330259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.342234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.342280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.354235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.354282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 [2024-07-15 13:11:35.366241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.366302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.967 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.967 Running I/O for 5 seconds... 00:27:22.967 [2024-07-15 13:11:35.381075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.967 [2024-07-15 13:11:35.381140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.968 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.968 [2024-07-15 13:11:35.397122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.968 [2024-07-15 13:11:35.397185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.968 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.968 [2024-07-15 13:11:35.407407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.968 [2024-07-15 13:11:35.407456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.968 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.968 [2024-07-15 13:11:35.422227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.968 [2024-07-15 13:11:35.422291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:22.968 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:22.968 [2024-07-15 13:11:35.431645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:22.968 [2024-07-15 13:11:35.431695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.226 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.446696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.446753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.466534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.466612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.482035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.482111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.491946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.492016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.507857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.507928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.526125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.526190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.535917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.535971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.552354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.552423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.567667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.567727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.586209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.586275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.597323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.597384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.608406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.608474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.623016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.623104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.642244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.642307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.654504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.654578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.668937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.669012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.679961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.680009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.227 [2024-07-15 13:11:35.694345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.227 [2024-07-15 13:11:35.694398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.227 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.707366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.707428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.719691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:23.486 [2024-07-15 13:11:35.719755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.734690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.734760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.749758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.749854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.761390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.761451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.775061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.775127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.790638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.790703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.810332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.810423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.822975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.823041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.839893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.839961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.856101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.856165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.872410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.872474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.883971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.884024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.899399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.899458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.911289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.911347] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.927826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.927895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.486 [2024-07-15 13:11:35.940222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.486 [2024-07-15 13:11:35.940280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.486 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:35.955457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:35.955515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:35.975268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:35.975326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:35.991463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:35.991523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.010019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.010084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.022361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.022444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.034425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.034494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.050563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.050636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.072411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.072469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.094816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.094895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.112869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.112928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.123510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.123569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.140204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.140260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.159214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.159279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.173045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.173121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:23.745 [2024-07-15 13:11:36.197826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:23.745 [2024-07-15 13:11:36.197903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:23.745 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:24.003 [2024-07-15 13:11:36.219102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:24.003 [2024-07-15 13:11:36.219185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:24.003 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:24.003 [2024-07-15 13:11:36.232800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:24.003 [2024-07-15 13:11:36.232877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:24.003 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:27:24.003 [2024-07-15 13:11:36.249501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:27:24.003 [2024-07-15 13:11:36.249578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:27:24.003 2024/07/15 13:11:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three console lines (spdk_nvmf_subsystem_add_ns_ext rejecting the duplicate NSID 1, nvmf_rpc_ns_paused unable to add the namespace, and the JSON-RPC client reporting Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns) repeat for every attempt logged from 13:11:36.263 through 13:11:38.398; only the timestamps (console clock 00:27:24.003 through 00:27:26.108) differ ...]
00:27:26.108 [2024-07-15 13:11:38.420105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:27:26.108 [2024-07-15 13:11:38.420169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.433881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.433939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.443747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.443813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.459793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.459850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.474063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.474122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.484799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.484865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.499918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.499969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:26.108 [2024-07-15 13:11:38.524245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.524306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.537870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.537932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.559736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.559829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.108 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.108 [2024-07-15 13:11:38.573377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.108 [2024-07-15 13:11:38.573445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.593040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.593113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.617592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.617690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.633668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.633718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.645680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.645742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.655619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.655664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.670694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.670756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.691008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.691083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.708985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.709061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.725454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.725516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.735078] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.366 [2024-07-15 13:11:38.735124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.366 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.366 [2024-07-15 13:11:38.751283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.751366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.367 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.367 [2024-07-15 13:11:38.767250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.767307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.367 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.367 [2024-07-15 13:11:38.785819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.785882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.367 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.367 [2024-07-15 13:11:38.796888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.796937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.367 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.367 [2024-07-15 13:11:38.811748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.811818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.367 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.367 [2024-07-15 13:11:38.830571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.367 [2024-07-15 13:11:38.830632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.848904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.848963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.862528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.862577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.883054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.883113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.902451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.902507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.912606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.912653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.925955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.926019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.624 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.624 [2024-07-15 13:11:38.935544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:26.624 [2024-07-15 13:11:38.935592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:38.950315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:38.950377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:38.959464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:38.959515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:38.975133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:38.975196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:38.993555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:38.993622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:39.014899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:39.014964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:39.031690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:39.031755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:39.047695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:39.047786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:39.067601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:39.067675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.625 [2024-07-15 13:11:39.084939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.625 [2024-07-15 13:11:39.085005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.625 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.883 [2024-07-15 13:11:39.099036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.883 [2024-07-15 13:11:39.099096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.883 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.883 [2024-07-15 13:11:39.119339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.883 [2024-07-15 13:11:39.119406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.883 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.883 [2024-07-15 13:11:39.134306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.883 [2024-07-15 13:11:39.134364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.883 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.883 [2024-07-15 13:11:39.144423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:26.884 [2024-07-15 13:11:39.144473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.159855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.159915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.177493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.177552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.187620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.187669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.204053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.204137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.216819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.216893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.233609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.233698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.251912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.252004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.265662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.265733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.281429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.281500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.294999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.295061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.311075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.311149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.325893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.325960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:26.884 [2024-07-15 13:11:39.339787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:26.884 [2024-07-15 13:11:39.339858] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:26.884 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.359998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.360098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.374100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.374175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.389900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.389981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.404509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.404594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.418820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.418894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.434059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.434153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.447348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.447430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.463498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.463604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.477844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.477926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.494133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.494229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.508747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.508826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.529430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.529505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.550820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.550911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.564505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.564577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.580448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.580536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.593323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.593381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.143 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.143 [2024-07-15 13:11:39.607208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.143 [2024-07-15 13:11:39.607279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.401 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.401 [2024-07-15 13:11:39.622182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.622260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.637332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.637404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:27:27.402 [2024-07-15 13:11:39.651359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.651436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.668386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.668470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.690826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.690913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.705720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.705829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.720691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.720802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.741332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.741428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.762952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.763049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.776950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.777035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.792778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.792855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.813570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.813659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.833733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.833855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.846906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.846989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.402 [2024-07-15 13:11:39.864456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.402 [2024-07-15 13:11:39.864539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.402 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.877959] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.878036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.894250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.894338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.908091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.908174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.924366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.924445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.945963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.946058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.959200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.959288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.975352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.975435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:39.999712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:39.999818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.013607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.013689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.032344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.032432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.045939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.046033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.058042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.058116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.068908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.068978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.080655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.080725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.092599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.092659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.103646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.103702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.660 [2024-07-15 13:11:40.118672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.660 [2024-07-15 13:11:40.118739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.660 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.138665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.138739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.155814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.155882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.175647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.175721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.188169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.188222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.202800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.202857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.222636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.222702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.239359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.239425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.257861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.257928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.270749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.270858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.286683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:27:27.918 [2024-07-15 13:11:40.286756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.305970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.306055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.317408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.317486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.337357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.337459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.918 [2024-07-15 13:11:40.359430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.918 [2024-07-15 13:11:40.359513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.918 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.919 [2024-07-15 13:11:40.373600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.919 [2024-07-15 13:11:40.373672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.919 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:27.919 [2024-07-15 13:11:40.382708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:27.919 [2024-07-15 13:11:40.382790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:27.919 00:27:27.919 Latency(us) 00:27:27.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.919 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, 
percentage: 50, depth: 128, IO size: 8192) 00:27:27.919 Nvme1n1 : 5.01 10380.33 81.10 0.00 0.00 12315.84 3142.75 26810.18 00:27:27.919 =================================================================================================================== 00:27:27.919 Total : 10380.33 81.10 0.00 0.00 12315.84 3142.75 26810.18 00:27:27.919 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.394288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.394359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.406333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.406417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.418281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.418352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.430281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.430356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.442267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.442336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.454224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.454277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.466242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.466298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.478271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.478327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.490235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.490284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.502226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.502273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.514234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.514287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.526255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.526308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.538278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.538341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.550283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.550347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.562301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.562368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 [2024-07-15 13:11:40.574295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:27:28.177 [2024-07-15 13:11:40.574369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:28.177 2024/07/15 13:11:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:28.177 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (108098) - No such process 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 108098 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:28.177 delay0 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:27:28.177 13:11:40 
nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.177 13:11:40 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:27:28.435 [2024-07-15 13:11:40.767831] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:36.539 Initializing NVMe Controllers 00:27:36.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.539 Initialization complete. Launching workers. 00:27:36.539 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 26922 00:27:36.539 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27008, failed to submit 146 00:27:36.539 success 26936, unsuccess 72, failed 0 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # nvmfcleanup 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.539 rmmod nvme_tcp 00:27:36.539 rmmod nvme_fabrics 00:27:36.539 rmmod nvme_keyring 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # '[' -n 107955 ']' 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # killprocess 107955 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 107955 ']' 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 107955 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107955 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:36.539 killing process with pid 107955 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:36.539 
13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107955' 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 107955 00:27:36.539 13:11:47 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 107955 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:27:36.539 00:27:36.539 real 0m24.906s 00:27:36.539 user 0m37.299s 00:27:36.539 sys 0m8.689s 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.539 ************************************ 00:27:36.539 END TEST nvmf_zcopy 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 ************************************ 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@58 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 ************************************ 00:27:36.539 START TEST nvmf_nmic 00:27:36.539 ************************************ 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:27:36.539 * Looking for test storage... 
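For reference, the zcopy run above drives the target with SPDK's bundled abort example application; the invocation recorded at zcopy.sh@56 can be reproduced on its own roughly as sketched below, assuming the same build tree under /home/vagrant/spdk_repo/spdk and a target already listening on 10.0.0.2:4420.

    # Submit random 50/50 read/write I/O at queue depth 64 from core 0 for 5 seconds
    # and issue abort commands against the outstanding I/O (the exact command logged above).
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Its per-controller summary (I/O completed, abort submitted, success/unsuccess counters) is what appears in the log above.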
00:27:36.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.539 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.540 13:11:48 
nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # nvmf_veth_init 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:27:36.540 Cannot find device "nvmf_tgt_br" 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.540 Cannot find device "nvmf_tgt_br2" 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:27:36.540 Cannot find device "nvmf_tgt_br" 
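The "Cannot find device" messages above are only nvmf_veth_init tearing down interfaces that do not exist yet; the topology it then builds (shown command by command in the log that follows) amounts to the sketch below. It is condensed to the first target interface; the log additionally creates nvmf_tgt_if2 with 10.0.0.3 the same way. The names and addresses are the ones from nvmf/common.sh printed above.

    # The target runs in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator gets 10.0.0.1, target gets 10.0.0.2 (the listener address used by the tests).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring the links up and bridge the two veth peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic reach the default port 4420 and cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT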
00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:27:36.540 Cannot find device "nvmf_tgt_br2" 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:27:36.540 13:11:48 
nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:36.540 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:27:36.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:27:36.541 00:27:36.541 --- 10.0.0.2 ping statistics --- 00:27:36.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.541 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:27:36.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:36.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:27:36.541 00:27:36.541 --- 10.0.0.3 ping statistics --- 00:27:36.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.541 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:36.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:36.541 00:27:36.541 --- 10.0.0.1 ping statistics --- 00:27:36.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.541 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@437 -- # return 0 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:27:36.541 13:11:48 
nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@485 -- # nvmfpid=108409 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@486 -- # waitforlisten 108409 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 108409 ']' 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.541 13:11:48 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:36.541 [2024-07-15 13:11:48.701340] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:36.541 [2024-07-15 13:11:48.702416] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:27:36.541 [2024-07-15 13:11:48.702481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.541 [2024-07-15 13:11:48.835982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.541 [2024-07-15 13:11:48.923415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.541 [2024-07-15 13:11:48.923491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.541 [2024-07-15 13:11:48.923508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.541 [2024-07-15 13:11:48.923521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.541 [2024-07-15 13:11:48.923531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.541 [2024-07-15 13:11:48.923649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.541 [2024-07-15 13:11:48.924361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.541 [2024-07-15 13:11:48.924446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.541 [2024-07-15 13:11:48.924432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.541 [2024-07-15 13:11:48.991366] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:36.541 [2024-07-15 13:11:48.991576] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:36.541 [2024-07-15 13:11:48.991714] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:36.541 [2024-07-15 13:11:48.991974] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
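With the veth topology in place, nvmfappstart launches the target inside that namespace with all four reactors in interrupt mode and waits for its RPC socket; the remaining poll-group threads are switched to interrupt mode in the entries that follow. Stripped of the rpc_cmd/waitforlisten harness wrappers, the step is roughly the sketch below; the /var/tmp/spdk.sock path comes from the "Waiting for process..." message, while the rpc_get_methods polling loop is an assumption, not copied from the script.

    # Start the NVMe-oF target in the test namespace: shm id 0, all tracepoint groups enabled,
    # interrupt mode, core mask 0xF (the exact command logged at nvmf/common.sh@484).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the application answers, bailing out if it died.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done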
00:27:36.541 [2024-07-15 13:11:48.991991] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 [2024-07-15 13:11:49.717318] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 Malloc0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 [2024-07-15 13:11:49.773487] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 test case1: single bdev can't be used in multiple subsystems 00:27:37.475 13:11:49 
nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 [2024-07-15 13:11:49.797221] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:27:37.475 [2024-07-15 13:11:49.797267] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:27:37.475 [2024-07-15 13:11:49.797280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:37.475 2024/07/15 13:11:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:37.475 request: 00:27:37.475 { 00:27:37.475 "method": "nvmf_subsystem_add_ns", 00:27:37.475 "params": { 00:27:37.475 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.475 "namespace": { 00:27:37.475 "bdev_name": "Malloc0", 00:27:37.475 "no_auto_visible": false 00:27:37.475 } 00:27:37.475 } 00:27:37.475 } 00:27:37.475 Got JSON-RPC error response 00:27:37.475 GoRPCClient: error on JSON-RPC call 00:27:37.475 Adding namespace failed - expected result. 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
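Test case1 above provokes that Invalid parameters error on purpose: Malloc0 is already claimed (exclusive_write) by the NVMe-oF target module on behalf of cnode1, so attaching it to a second subsystem has to fail. Issued directly with rpc.py instead of through rpc_cmd, the sequence would look roughly as follows; the rpc.py path and the default socket are assumptions, while the RPC names and arguments are the ones recorded in the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport and backing bdev, as created at nmic.sh@17 and @20: a TCP transport with the
    # options used by the test (-o -u 8192) and a 64 MB malloc bdev (512-byte blocks).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0

    # Subsystem 1 claims Malloc0 and listens on 10.0.0.2:4420.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Subsystem 2: adding the same bdev must be rejected with -32602 Invalid parameters,
    # because the bdev open fails while Malloc0 is claimed by the first subsystem.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: namespace add succeeded' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'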
00:27:37.475 test case2: host connect to nvmf target in multiple paths 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:37.475 [2024-07-15 13:11:49.809391] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:37.475 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:27:37.733 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:27:37.733 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:27:37.733 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:37.733 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:37.733 13:11:49 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:27:39.630 13:11:51 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:27:39.630 [global] 00:27:39.630 thread=1 00:27:39.630 invalidate=1 00:27:39.630 rw=write 00:27:39.630 time_based=1 00:27:39.630 runtime=1 00:27:39.630 ioengine=libaio 00:27:39.630 direct=1 00:27:39.630 bs=4096 00:27:39.630 iodepth=1 00:27:39.630 norandommap=0 00:27:39.630 numjobs=1 00:27:39.630 00:27:39.630 verify_dump=1 00:27:39.630 verify_backlog=512 00:27:39.630 verify_state_save=0 00:27:39.630 do_verify=1 00:27:39.630 verify=crc32c-intel 00:27:39.630 [job0] 00:27:39.630 filename=/dev/nvme0n1 00:27:39.630 Could not set queue depth (nvme0n1) 00:27:39.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:39.886 fio-3.35 00:27:39.886 Starting 
1 thread 00:27:40.819 00:27:40.819 job0: (groupid=0, jobs=1): err= 0: pid=108519: Mon Jul 15 13:11:53 2024 00:27:40.819 read: IOPS=2838, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:27:40.819 slat (nsec): min=15047, max=66505, avg=20650.42, stdev=5881.84 00:27:40.819 clat (usec): min=130, max=291, avg=171.24, stdev=30.35 00:27:40.819 lat (usec): min=147, max=313, avg=191.89, stdev=30.87 00:27:40.819 clat percentiles (usec): 00:27:40.819 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:27:40.819 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 167], 00:27:40.819 | 70.00th=[ 178], 80.00th=[ 200], 90.00th=[ 221], 95.00th=[ 233], 00:27:40.819 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:27:40.819 | 99.99th=[ 293] 00:27:40.819 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:27:40.819 slat (usec): min=21, max=164, avg=31.71, stdev=10.41 00:27:40.819 clat (usec): min=79, max=675, avg=112.04, stdev=21.36 00:27:40.819 lat (usec): min=113, max=698, avg=143.76, stdev=24.58 00:27:40.819 clat percentiles (usec): 00:27:40.819 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 100], 00:27:40.819 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:27:40.819 | 70.00th=[ 116], 80.00th=[ 122], 90.00th=[ 133], 95.00th=[ 147], 00:27:40.819 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 302], 99.95th=[ 523], 00:27:40.819 | 99.99th=[ 676] 00:27:40.819 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:27:40.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:27:40.819 lat (usec) : 100=10.59%, 250=88.53%, 500=0.85%, 750=0.03% 00:27:40.819 cpu : usr=3.10%, sys=10.70%, ctx=5914, majf=0, minf=2 00:27:40.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.819 issued rwts: total=2841,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:40.819 00:27:40.819 Run status group 0 (all jobs): 00:27:40.819 READ: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec 00:27:40.819 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:27:40.819 00:27:40.819 Disk stats (read/write): 00:27:40.819 nvme0n1: ios=2610/2759, merge=0/0, ticks=472/341, in_queue=813, util=91.48% 00:27:40.819 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:41.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.077 13:11:53 
nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # nvmfcleanup 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.077 rmmod nvme_tcp 00:27:41.077 rmmod nvme_fabrics 00:27:41.077 rmmod nvme_keyring 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # '[' -n 108409 ']' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # killprocess 108409 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 108409 ']' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 108409 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108409 00:27:41.077 killing process with pid 108409 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108409' 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 108409 00:27:41.077 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 108409 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.334 
13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:27:41.334 00:27:41.334 real 0m5.579s 00:27:41.334 user 0m13.620s 00:27:41.334 sys 0m2.946s 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.334 ************************************ 00:27:41.334 END TEST nvmf_nmic 00:27:41.334 ************************************ 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@59 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.334 13:11:53 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:41.592 ************************************ 00:27:41.592 START TEST nvmf_fio_target 00:27:41.592 ************************************ 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:27:41.592 * Looking for test storage... 00:27:41.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:41.592 
13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:41.592 13:11:53 
nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # nvmf_veth_init 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:27:41.592 Cannot find device "nvmf_tgt_br" 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:27:41.592 Cannot find device "nvmf_tgt_br2" 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # true 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:27:41.592 Cannot find device "nvmf_tgt_br" 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:27:41.592 Cannot find device "nvmf_tgt_br2" 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:27:41.592 13:11:53 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:41.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:41.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:41.592 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:27:41.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:27:41.850 00:27:41.850 --- 10.0.0.2 ping statistics --- 00:27:41.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.850 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:27:41.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:41.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:27:41.850 00:27:41.850 --- 10.0.0.3 ping statistics --- 00:27:41.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.850 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:41.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:41.850 00:27:41.850 --- 10.0.0.1 ping statistics --- 00:27:41.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.850 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@437 -- # return 0 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@485 -- # nvmfpid=108691 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@486 -- # waitforlisten 108691 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 108691 ']' 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.850 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.851 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
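For readers reconstructing the nvmf_veth_init steps traced above: the test network is three veth pairs joined by one bridge, with the target-side ends moved into a private namespace, so the initiator keeps 10.0.0.1 on nvmf_init_if while the namespace owns 10.0.0.2 and 10.0.0.3. A condensed sketch of the same commands (interface, namespace and address names exactly as in the log; the script's error handling and leftover-interface cleanup omitted):

    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator side outside the namespace, target side inside it.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # The *_br peers all hang off one bridge, so 10.0.0.1 can reach .2 and .3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP (port 4420) in, allow bridged forwarding, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output above and below are consistent with this all-local veth/bridge path.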
00:27:41.851 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.851 13:11:54 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.851 [2024-07-15 13:11:54.292046] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:41.851 [2024-07-15 13:11:54.293103] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:27:41.851 [2024-07-15 13:11:54.293170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.108 [2024-07-15 13:11:54.426401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.108 [2024-07-15 13:11:54.514298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.108 [2024-07-15 13:11:54.514387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.108 [2024-07-15 13:11:54.514411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.108 [2024-07-15 13:11:54.514426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.108 [2024-07-15 13:11:54.514438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.108 [2024-07-15 13:11:54.514702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.108 [2024-07-15 13:11:54.515395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.108 [2024-07-15 13:11:54.515471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.108 [2024-07-15 13:11:54.515485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.365 [2024-07-15 13:11:54.586130] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:42.365 [2024-07-15 13:11:54.586610] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:42.365 [2024-07-15 13:11:54.586853] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:42.365 [2024-07-15 13:11:54.587173] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:42.365 [2024-07-15 13:11:54.587230] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
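The nvmfappstart call traced above is where the target proper comes up: nvmf_tgt is launched inside the namespace with tracing enabled (-e 0xFFFF), a four-core mask (-m 0xF) and --interrupt-mode, and the script then blocks until the JSON-RPC socket answers before sending any configuration. A minimal stand-in for that start-and-wait step (binary path and flags copied from the log; the loop below only checks that the socket appears and the process stays alive, a simplification of the script's waitforlisten helper, which also round-trips an RPC):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # Give up if the target died; otherwise wait for its RPC socket to appear.
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        [ -S "$sock" ] && break
        sleep 0.1
    done

Once the app is up, the NOTICE lines above confirm that the app thread and each reactor's nvmf_tgt poll group thread have been switched to interrupt mode, which is the property this nvmf_tcp_interrupt_mode run is exercising.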
00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.929 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:43.186 [2024-07-15 13:11:55.536563] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.186 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:43.749 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:27:43.749 13:11:55 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:43.749 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:27:43.749 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:44.313 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:27:44.313 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:44.570 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:27:44.571 13:11:56 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:27:44.827 13:11:57 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:45.085 13:11:57 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:27:45.085 13:11:57 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:45.652 13:11:57 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:27:45.652 13:11:57 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:45.652 13:11:58 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:27:45.652 13:11:58 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:27:46.216 13:11:58 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:46.474 13:11:58 
nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:46.474 13:11:58 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.732 13:11:59 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:27:46.732 13:11:59 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:46.989 13:11:59 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.247 [2024-07-15 13:11:59.633034] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.247 13:11:59 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:27:47.504 13:11:59 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:27:47.761 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:27:48.019 13:12:00 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:27:49.914 13:12:02 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:27:49.914 [global] 00:27:49.914 thread=1 00:27:49.914 invalidate=1 00:27:49.914 rw=write 00:27:49.914 time_based=1 00:27:49.914 runtime=1 00:27:49.914 ioengine=libaio 00:27:49.914 direct=1 00:27:49.914 bs=4096 00:27:49.914 iodepth=1 00:27:49.914 norandommap=0 00:27:49.914 numjobs=1 00:27:49.914 00:27:49.914 
verify_dump=1 00:27:49.914 verify_backlog=512 00:27:49.914 verify_state_save=0 00:27:49.914 do_verify=1 00:27:49.914 verify=crc32c-intel 00:27:49.914 [job0] 00:27:49.914 filename=/dev/nvme0n1 00:27:49.914 [job1] 00:27:49.914 filename=/dev/nvme0n2 00:27:49.914 [job2] 00:27:49.914 filename=/dev/nvme0n3 00:27:49.914 [job3] 00:27:49.914 filename=/dev/nvme0n4 00:27:49.914 Could not set queue depth (nvme0n1) 00:27:49.914 Could not set queue depth (nvme0n2) 00:27:49.914 Could not set queue depth (nvme0n3) 00:27:49.914 Could not set queue depth (nvme0n4) 00:27:50.171 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:50.171 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:50.171 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:50.171 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:50.171 fio-3.35 00:27:50.171 Starting 4 threads 00:27:51.543 00:27:51.543 job0: (groupid=0, jobs=1): err= 0: pid=108986: Mon Jul 15 13:12:03 2024 00:27:51.543 read: IOPS=1184, BW=4739KiB/s (4853kB/s)(4744KiB/1001msec) 00:27:51.543 slat (nsec): min=11743, max=59706, avg=24671.51, stdev=5682.46 00:27:51.543 clat (usec): min=178, max=41098, avg=402.64, stdev=1183.45 00:27:51.543 lat (usec): min=190, max=41124, avg=427.31, stdev=1183.60 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 355], 00:27:51.543 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 379], 00:27:51.543 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 404], 95.00th=[ 412], 00:27:51.543 | 99.00th=[ 453], 99.50th=[ 490], 99.90th=[ 840], 99.95th=[41157], 00:27:51.543 | 99.99th=[41157] 00:27:51.543 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:51.543 slat (usec): min=15, max=103, avg=40.57, stdev= 7.25 00:27:51.543 clat (usec): min=134, max=4210, avg=275.48, stdev=113.30 00:27:51.543 lat (usec): min=166, max=4252, avg=316.05, stdev=114.16 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 235], 00:27:51.543 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:27:51.543 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:27:51.543 | 99.00th=[ 375], 99.50th=[ 494], 99.90th=[ 1156], 99.95th=[ 4228], 00:27:51.543 | 99.99th=[ 4228] 00:27:51.543 bw ( KiB/s): min= 5928, max= 5928, per=24.15%, avg=5928.00, stdev= 0.00, samples=1 00:27:51.543 iops : min= 1482, max= 1482, avg=1482.00, stdev= 0.00, samples=1 00:27:51.543 lat (usec) : 250=12.86%, 500=86.70%, 750=0.29%, 1000=0.04% 00:27:51.543 lat (msec) : 2=0.04%, 10=0.04%, 50=0.04% 00:27:51.543 cpu : usr=1.70%, sys=6.80%, ctx=2735, majf=0, minf=8 00:27:51.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 issued rwts: total=1186,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:51.543 job1: (groupid=0, jobs=1): err= 0: pid=108990: Mon Jul 15 13:12:03 2024 00:27:51.543 read: IOPS=1185, BW=4743KiB/s (4857kB/s)(4748KiB/1001msec) 00:27:51.543 slat (nsec): min=11901, max=77727, avg=24983.23, stdev=5490.48 00:27:51.543 clat 
(usec): min=177, max=41108, avg=402.14, stdev=1183.25 00:27:51.543 lat (usec): min=189, max=41134, avg=427.13, stdev=1183.38 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 355], 00:27:51.543 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 379], 00:27:51.543 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 416], 00:27:51.543 | 99.00th=[ 445], 99.50th=[ 482], 99.90th=[ 848], 99.95th=[41157], 00:27:51.543 | 99.99th=[41157] 00:27:51.543 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:51.543 slat (usec): min=11, max=4137, avg=42.95, stdev=104.82 00:27:51.543 clat (usec): min=105, max=1153, avg=272.97, stdev=52.83 00:27:51.543 lat (usec): min=132, max=4273, avg=315.92, stdev=114.58 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 237], 00:27:51.543 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:27:51.543 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:27:51.543 | 99.00th=[ 363], 99.50th=[ 461], 99.90th=[ 709], 99.95th=[ 1156], 00:27:51.543 | 99.99th=[ 1156] 00:27:51.543 bw ( KiB/s): min= 5931, max= 5931, per=24.16%, avg=5931.00, stdev= 0.00, samples=1 00:27:51.543 iops : min= 1482, max= 1482, avg=1482.00, stdev= 0.00, samples=1 00:27:51.543 lat (usec) : 250=12.74%, 500=86.85%, 750=0.29%, 1000=0.04% 00:27:51.543 lat (msec) : 2=0.04%, 50=0.04% 00:27:51.543 cpu : usr=1.60%, sys=6.70%, ctx=2736, majf=0, minf=11 00:27:51.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 issued rwts: total=1187,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:51.543 job2: (groupid=0, jobs=1): err= 0: pid=108991: Mon Jul 15 13:12:03 2024 00:27:51.543 read: IOPS=1102, BW=4412KiB/s (4517kB/s)(4416KiB/1001msec) 00:27:51.543 slat (nsec): min=14943, max=81044, avg=35054.70, stdev=8663.09 00:27:51.543 clat (usec): min=212, max=693, avg=428.89, stdev=63.84 00:27:51.543 lat (usec): min=228, max=726, avg=463.94, stdev=66.09 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 251], 5.00th=[ 277], 10.00th=[ 371], 20.00th=[ 404], 00:27:51.543 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 441], 00:27:51.543 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 482], 95.00th=[ 545], 00:27:51.543 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 693], 00:27:51.543 | 99.99th=[ 693] 00:27:51.543 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:51.543 slat (usec): min=22, max=141, avg=45.94, stdev=12.61 00:27:51.543 clat (usec): min=138, max=493, avg=265.53, stdev=53.52 00:27:51.543 lat (usec): min=184, max=565, avg=311.47, stdev=52.22 00:27:51.543 clat percentiles (usec): 00:27:51.543 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 202], 00:27:51.543 | 30.00th=[ 225], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 289], 00:27:51.543 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:27:51.543 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 453], 99.95th=[ 494], 00:27:51.543 | 99.99th=[ 494] 00:27:51.543 bw ( KiB/s): min= 6088, max= 6088, per=24.80%, avg=6088.00, stdev= 0.00, samples=1 00:27:51.543 iops : min= 1522, max= 1522, avg=1522.00, stdev= 0.00, samples=1 
00:27:51.543 lat (usec) : 250=21.17%, 500=75.83%, 750=2.99% 00:27:51.543 cpu : usr=2.10%, sys=8.00%, ctx=2640, majf=0, minf=9 00:27:51.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.543 issued rwts: total=1104,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:51.544 job3: (groupid=0, jobs=1): err= 0: pid=108992: Mon Jul 15 13:12:03 2024 00:27:51.544 read: IOPS=1032, BW=4132KiB/s (4231kB/s)(4136KiB/1001msec) 00:27:51.544 slat (nsec): min=17317, max=85096, avg=37795.64, stdev=8426.67 00:27:51.544 clat (usec): min=261, max=3157, avg=454.95, stdev=100.57 00:27:51.544 lat (usec): min=294, max=3198, avg=492.75, stdev=100.59 00:27:51.544 clat percentiles (usec): 00:27:51.544 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 420], 00:27:51.544 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 441], 60.00th=[ 449], 00:27:51.544 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 545], 95.00th=[ 570], 00:27:51.544 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 1012], 99.95th=[ 3163], 00:27:51.544 | 99.99th=[ 3163] 00:27:51.544 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:51.544 slat (usec): min=23, max=103, avg=47.16, stdev=10.84 00:27:51.544 clat (usec): min=125, max=532, avg=265.74, stdev=49.76 00:27:51.544 lat (usec): min=204, max=584, avg=312.90, stdev=51.11 00:27:51.544 clat percentiles (usec): 00:27:51.544 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 208], 00:27:51.544 | 30.00th=[ 237], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:27:51.544 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 334], 00:27:51.544 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 465], 99.95th=[ 529], 00:27:51.544 | 99.99th=[ 529] 00:27:51.544 bw ( KiB/s): min= 5808, max= 5808, per=23.66%, avg=5808.00, stdev= 0.00, samples=1 00:27:51.544 iops : min= 1452, max= 1452, avg=1452.00, stdev= 0.00, samples=1 00:27:51.544 lat (usec) : 250=19.65%, 500=75.33%, 750=4.90%, 1000=0.04% 00:27:51.544 lat (msec) : 2=0.04%, 4=0.04% 00:27:51.544 cpu : usr=2.10%, sys=8.30%, ctx=2571, majf=0, minf=7 00:27:51.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.544 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:51.544 00:27:51.544 Run status group 0 (all jobs): 00:27:51.544 READ: bw=17.6MiB/s (18.5MB/s), 4132KiB/s-4743KiB/s (4231kB/s-4857kB/s), io=17.6MiB (18.5MB), run=1001-1001msec 00:27:51.544 WRITE: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:27:51.544 00:27:51.544 Disk stats (read/write): 00:27:51.544 nvme0n1: ios=1074/1159, merge=0/0, ticks=447/360, in_queue=807, util=86.87% 00:27:51.544 nvme0n2: ios=1065/1158, merge=0/0, ticks=477/355, in_queue=832, util=88.49% 00:27:51.544 nvme0n3: ios=1049/1082, merge=0/0, ticks=502/310, in_queue=812, util=89.34% 00:27:51.544 nvme0n4: ios=1018/1024, merge=0/0, ticks=474/315, in_queue=789, util=89.46% 00:27:51.544 13:12:03 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:27:51.544 [global] 00:27:51.544 thread=1 00:27:51.544 invalidate=1 00:27:51.544 rw=randwrite 00:27:51.544 time_based=1 00:27:51.544 runtime=1 00:27:51.544 ioengine=libaio 00:27:51.544 direct=1 00:27:51.544 bs=4096 00:27:51.544 iodepth=1 00:27:51.544 norandommap=0 00:27:51.544 numjobs=1 00:27:51.544 00:27:51.544 verify_dump=1 00:27:51.544 verify_backlog=512 00:27:51.544 verify_state_save=0 00:27:51.544 do_verify=1 00:27:51.544 verify=crc32c-intel 00:27:51.544 [job0] 00:27:51.544 filename=/dev/nvme0n1 00:27:51.544 [job1] 00:27:51.544 filename=/dev/nvme0n2 00:27:51.544 [job2] 00:27:51.544 filename=/dev/nvme0n3 00:27:51.544 [job3] 00:27:51.544 filename=/dev/nvme0n4 00:27:51.544 Could not set queue depth (nvme0n1) 00:27:51.544 Could not set queue depth (nvme0n2) 00:27:51.544 Could not set queue depth (nvme0n3) 00:27:51.544 Could not set queue depth (nvme0n4) 00:27:51.544 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.544 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.544 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.544 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.544 fio-3.35 00:27:51.544 Starting 4 threads 00:27:52.915 00:27:52.915 job0: (groupid=0, jobs=1): err= 0: pid=109045: Mon Jul 15 13:12:05 2024 00:27:52.915 read: IOPS=1251, BW=5007KiB/s (5127kB/s)(5012KiB/1001msec) 00:27:52.915 slat (usec): min=9, max=123, avg=27.22, stdev=13.66 00:27:52.915 clat (usec): min=243, max=933, avg=383.86, stdev=84.99 00:27:52.915 lat (usec): min=298, max=946, avg=411.08, stdev=85.77 00:27:52.915 clat percentiles (usec): 00:27:52.915 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 330], 00:27:52.915 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:27:52.915 | 70.00th=[ 379], 80.00th=[ 424], 90.00th=[ 529], 95.00th=[ 562], 00:27:52.915 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 758], 99.95th=[ 930], 00:27:52.915 | 99.99th=[ 930] 00:27:52.915 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:52.915 slat (usec): min=13, max=122, avg=38.34, stdev=13.40 00:27:52.915 clat (usec): min=95, max=617, avg=272.34, stdev=52.14 00:27:52.915 lat (usec): min=155, max=666, avg=310.68, stdev=49.70 00:27:52.915 clat percentiles (usec): 00:27:52.915 | 1.00th=[ 155], 5.00th=[ 196], 10.00th=[ 215], 20.00th=[ 239], 00:27:52.915 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:27:52.915 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 355], 00:27:52.915 | 99.00th=[ 404], 99.50th=[ 506], 99.90th=[ 578], 99.95th=[ 619], 00:27:52.915 | 99.99th=[ 619] 00:27:52.915 bw ( KiB/s): min= 7792, max= 7792, per=29.30%, avg=7792.00, stdev= 0.00, samples=1 00:27:52.915 iops : min= 1948, max= 1948, avg=1948.00, stdev= 0.00, samples=1 00:27:52.915 lat (usec) : 100=0.04%, 250=17.68%, 500=75.19%, 750=7.03%, 1000=0.07% 00:27:52.915 cpu : usr=1.30%, sys=7.40%, ctx=3335, majf=0, minf=12 00:27:52.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.915 issued rwts: total=1253,1536,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:52.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.915 job1: (groupid=0, jobs=1): err= 0: pid=109046: Mon Jul 15 13:12:05 2024 00:27:52.915 read: IOPS=1250, BW=5003KiB/s (5123kB/s)(5008KiB/1001msec) 00:27:52.915 slat (usec): min=9, max=110, avg=26.63, stdev=12.16 00:27:52.915 clat (usec): min=266, max=908, avg=383.96, stdev=86.41 00:27:52.915 lat (usec): min=292, max=924, avg=410.59, stdev=84.88 00:27:52.915 clat percentiles (usec): 00:27:52.915 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:27:52.915 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 367], 00:27:52.915 | 70.00th=[ 383], 80.00th=[ 433], 90.00th=[ 529], 95.00th=[ 578], 00:27:52.915 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 807], 99.95th=[ 906], 00:27:52.915 | 99.99th=[ 906] 00:27:52.915 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:52.915 slat (usec): min=14, max=291, avg=36.51, stdev=14.86 00:27:52.915 clat (usec): min=3, max=627, avg=274.71, stdev=53.75 00:27:52.915 lat (usec): min=145, max=662, avg=311.22, stdev=49.58 00:27:52.915 clat percentiles (usec): 00:27:52.915 | 1.00th=[ 163], 5.00th=[ 204], 10.00th=[ 219], 20.00th=[ 233], 00:27:52.915 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:27:52.915 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 363], 00:27:52.915 | 99.00th=[ 404], 99.50th=[ 515], 99.90th=[ 603], 99.95th=[ 627], 00:27:52.915 | 99.99th=[ 627] 00:27:52.915 bw ( KiB/s): min= 7752, max= 7752, per=29.15%, avg=7752.00, stdev= 0.00, samples=1 00:27:52.915 iops : min= 1938, max= 1938, avg=1938.00, stdev= 0.00, samples=1 00:27:52.915 lat (usec) : 4=0.04%, 250=17.43%, 500=76.26%, 750=6.17%, 1000=0.11% 00:27:52.915 cpu : usr=1.50%, sys=7.00%, ctx=3273, majf=0, minf=9 00:27:52.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.915 issued rwts: total=1252,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.916 job2: (groupid=0, jobs=1): err= 0: pid=109047: Mon Jul 15 13:12:05 2024 00:27:52.916 read: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec) 00:27:52.916 slat (nsec): min=14303, max=60123, avg=29890.37, stdev=4619.26 00:27:52.916 clat (usec): min=158, max=577, avg=246.34, stdev=46.76 00:27:52.916 lat (usec): min=183, max=607, avg=276.23, stdev=46.97 00:27:52.916 clat percentiles (usec): 00:27:52.916 | 1.00th=[ 184], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 00:27:52.916 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 00:27:52.916 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 277], 95.00th=[ 371], 00:27:52.916 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 553], 99.95th=[ 578], 00:27:52.916 | 99.99th=[ 578] 00:27:52.916 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:27:52.916 slat (usec): min=39, max=121, avg=43.62, stdev= 4.48 00:27:52.916 clat (usec): min=145, max=459, avg=195.96, stdev=52.99 00:27:52.916 lat (usec): min=188, max=505, avg=239.57, stdev=54.30 00:27:52.916 clat percentiles (usec): 00:27:52.916 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:27:52.916 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:27:52.916 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 322], 95.00th=[ 330], 
00:27:52.916 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 392], 00:27:52.916 | 99.99th=[ 461] 00:27:52.916 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:27:52.916 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:27:52.916 lat (usec) : 250=84.59%, 500=15.36%, 750=0.05% 00:27:52.916 cpu : usr=2.50%, sys=10.80%, ctx=3842, majf=0, minf=15 00:27:52.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.916 issued rwts: total=1794,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.916 job3: (groupid=0, jobs=1): err= 0: pid=109048: Mon Jul 15 13:12:05 2024 00:27:52.916 read: IOPS=1252, BW=5011KiB/s (5131kB/s)(5016KiB/1001msec) 00:27:52.916 slat (usec): min=10, max=125, avg=27.44, stdev=11.72 00:27:52.916 clat (usec): min=179, max=960, avg=382.83, stdev=84.85 00:27:52.916 lat (usec): min=211, max=1055, avg=410.26, stdev=85.76 00:27:52.916 clat percentiles (usec): 00:27:52.916 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:27:52.916 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:27:52.916 | 70.00th=[ 375], 80.00th=[ 420], 90.00th=[ 529], 95.00th=[ 570], 00:27:52.916 | 99.00th=[ 668], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 963], 00:27:52.916 | 99.99th=[ 963] 00:27:52.916 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:27:52.916 slat (usec): min=16, max=131, avg=38.20, stdev=11.52 00:27:52.916 clat (usec): min=117, max=715, avg=272.81, stdev=50.93 00:27:52.916 lat (usec): min=159, max=753, avg=311.01, stdev=49.88 00:27:52.916 clat percentiles (usec): 00:27:52.916 | 1.00th=[ 167], 5.00th=[ 204], 10.00th=[ 221], 20.00th=[ 235], 00:27:52.916 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 277], 00:27:52.916 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 355], 00:27:52.916 | 99.00th=[ 388], 99.50th=[ 494], 99.90th=[ 553], 99.95th=[ 717], 00:27:52.916 | 99.99th=[ 717] 00:27:52.916 bw ( KiB/s): min= 7776, max= 7776, per=29.24%, avg=7776.00, stdev= 0.00, samples=1 00:27:52.916 iops : min= 1944, max= 1944, avg=1944.00, stdev= 0.00, samples=1 00:27:52.916 lat (usec) : 250=17.92%, 500=75.56%, 750=6.49%, 1000=0.04% 00:27:52.916 cpu : usr=1.60%, sys=7.10%, ctx=3107, majf=0, minf=9 00:27:52.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.916 issued rwts: total=1254,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.916 00:27:52.916 Run status group 0 (all jobs): 00:27:52.916 READ: bw=21.7MiB/s (22.7MB/s), 5003KiB/s-7169KiB/s (5123kB/s-7341kB/s), io=21.7MiB (22.7MB), run=1001-1001msec 00:27:52.916 WRITE: bw=26.0MiB/s (27.2MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:27:52.916 00:27:52.916 Disk stats (read/write): 00:27:52.916 nvme0n1: ios=1074/1484, merge=0/0, ticks=405/409, in_queue=814, util=88.18% 00:27:52.916 nvme0n2: ios=1064/1480, merge=0/0, ticks=366/383, in_queue=749, util=87.54% 00:27:52.916 nvme0n3: ios=1536/1871, merge=0/0, ticks=367/390, 
in_queue=757, util=88.85% 00:27:52.916 nvme0n4: ios=1024/1481, merge=0/0, ticks=352/408, in_queue=760, util=89.50% 00:27:52.916 13:12:05 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:27:52.916 [global] 00:27:52.916 thread=1 00:27:52.916 invalidate=1 00:27:52.916 rw=write 00:27:52.916 time_based=1 00:27:52.916 runtime=1 00:27:52.916 ioengine=libaio 00:27:52.916 direct=1 00:27:52.916 bs=4096 00:27:52.916 iodepth=128 00:27:52.916 norandommap=0 00:27:52.916 numjobs=1 00:27:52.916 00:27:52.916 verify_dump=1 00:27:52.916 verify_backlog=512 00:27:52.916 verify_state_save=0 00:27:52.916 do_verify=1 00:27:52.916 verify=crc32c-intel 00:27:52.916 [job0] 00:27:52.916 filename=/dev/nvme0n1 00:27:52.916 [job1] 00:27:52.916 filename=/dev/nvme0n2 00:27:52.916 [job2] 00:27:52.916 filename=/dev/nvme0n3 00:27:52.916 [job3] 00:27:52.916 filename=/dev/nvme0n4 00:27:52.916 Could not set queue depth (nvme0n1) 00:27:52.916 Could not set queue depth (nvme0n2) 00:27:52.916 Could not set queue depth (nvme0n3) 00:27:52.916 Could not set queue depth (nvme0n4) 00:27:52.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:52.916 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:52.916 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:52.916 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:52.916 fio-3.35 00:27:52.916 Starting 4 threads 00:27:54.309 00:27:54.309 job0: (groupid=0, jobs=1): err= 0: pid=109108: Mon Jul 15 13:12:06 2024 00:27:54.309 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:27:54.309 slat (usec): min=6, max=3252, avg=95.17, stdev=405.76 00:27:54.309 clat (usec): min=8701, max=16901, avg=12530.49, stdev=1615.85 00:27:54.309 lat (usec): min=8815, max=17145, avg=12625.66, stdev=1595.44 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11076], 20.00th=[11338], 00:27:54.309 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12387], 00:27:54.309 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15008], 95.00th=[15401], 00:27:54.309 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16909], 99.95th=[16909], 00:27:54.309 | 99.99th=[16909] 00:27:54.309 write: IOPS=5401, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1003msec); 0 zone resets 00:27:54.309 slat (usec): min=9, max=3611, avg=87.11, stdev=329.14 00:27:54.309 clat (usec): min=440, max=16529, avg=11545.97, stdev=1814.89 00:27:54.309 lat (usec): min=3217, max=16550, avg=11633.08, stdev=1816.26 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:27:54.309 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:27:54.309 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14091], 95.00th=[14877], 00:27:54.309 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16450], 99.95th=[16581], 00:27:54.309 | 99.99th=[16581] 00:27:54.309 bw ( KiB/s): min=20048, max=22235, per=32.46%, avg=21141.50, stdev=1546.44, samples=2 00:27:54.309 iops : min= 5012, max= 5558, avg=5285.00, stdev=386.08, samples=2 00:27:54.309 lat (usec) : 500=0.01% 00:27:54.309 lat (msec) : 4=0.27%, 10=12.66%, 20=87.07% 00:27:54.309 cpu : usr=4.09%, sys=15.27%, ctx=625, majf=0, minf=13 00:27:54.309 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:54.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:54.309 issued rwts: total=5120,5418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:54.309 job1: (groupid=0, jobs=1): err= 0: pid=109109: Mon Jul 15 13:12:06 2024 00:27:54.309 read: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1003msec) 00:27:54.309 slat (usec): min=4, max=8056, avg=180.73, stdev=911.08 00:27:54.309 clat (usec): min=538, max=34215, avg=23913.05, stdev=3546.24 00:27:54.309 lat (usec): min=8070, max=34244, avg=24093.78, stdev=3430.60 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 8455], 5.00th=[18744], 10.00th=[19268], 20.00th=[22676], 00:27:54.309 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24249], 60.00th=[24249], 00:27:54.309 | 70.00th=[24773], 80.00th=[25560], 90.00th=[26870], 95.00th=[30016], 00:27:54.309 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:27:54.309 | 99.99th=[34341] 00:27:54.309 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:27:54.309 slat (usec): min=11, max=10139, avg=165.10, stdev=773.83 00:27:54.309 clat (usec): min=12474, max=28318, avg=20784.32, stdev=2745.28 00:27:54.309 lat (usec): min=13006, max=29515, avg=20949.42, stdev=2674.86 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[13698], 5.00th=[15795], 10.00th=[16450], 20.00th=[17957], 00:27:54.309 | 30.00th=[19792], 40.00th=[21365], 50.00th=[21890], 60.00th=[22152], 00:27:54.309 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[24249], 00:27:54.309 | 99.00th=[27132], 99.50th=[28181], 99.90th=[28181], 99.95th=[28443], 00:27:54.309 | 99.99th=[28443] 00:27:54.309 bw ( KiB/s): min=11528, max=12288, per=18.28%, avg=11908.00, stdev=537.40, samples=2 00:27:54.309 iops : min= 2882, max= 3072, avg=2977.00, stdev=134.35, samples=2 00:27:54.309 lat (usec) : 750=0.02% 00:27:54.309 lat (msec) : 10=0.56%, 20=22.95%, 50=76.47% 00:27:54.309 cpu : usr=2.20%, sys=9.58%, ctx=185, majf=0, minf=17 00:27:54.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:54.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:54.309 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:54.309 job2: (groupid=0, jobs=1): err= 0: pid=109110: Mon Jul 15 13:12:06 2024 00:27:54.309 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:27:54.309 slat (usec): min=3, max=8494, avg=193.68, stdev=900.59 00:27:54.309 clat (usec): min=15404, max=32323, avg=24754.26, stdev=2883.21 00:27:54.309 lat (usec): min=15430, max=36420, avg=24947.93, stdev=2782.76 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[17957], 5.00th=[20055], 10.00th=[21890], 20.00th=[22938], 00:27:54.309 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:27:54.309 | 70.00th=[25560], 80.00th=[26608], 90.00th=[28967], 95.00th=[31065], 00:27:54.309 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:27:54.309 | 99.99th=[32375] 00:27:54.309 write: IOPS=2715, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1002msec); 0 zone resets 00:27:54.309 slat (usec): min=10, max=8458, avg=176.75, stdev=752.66 00:27:54.309 clat (usec): 
min=575, max=33751, avg=23146.65, stdev=4629.16 00:27:54.309 lat (usec): min=4416, max=33777, avg=23323.40, stdev=4593.30 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 5342], 5.00th=[16450], 10.00th=[18220], 20.00th=[21365], 00:27:54.309 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22414], 60.00th=[22938], 00:27:54.309 | 70.00th=[24511], 80.00th=[26870], 90.00th=[29754], 95.00th=[31851], 00:27:54.309 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:27:54.309 | 99.99th=[33817] 00:27:54.309 bw ( KiB/s): min= 8464, max=12288, per=15.93%, avg=10376.00, stdev=2703.98, samples=2 00:27:54.309 iops : min= 2116, max= 3072, avg=2594.00, stdev=675.99, samples=2 00:27:54.309 lat (usec) : 750=0.02% 00:27:54.309 lat (msec) : 10=0.72%, 20=8.82%, 50=90.44% 00:27:54.309 cpu : usr=2.60%, sys=8.69%, ctx=255, majf=0, minf=11 00:27:54.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:54.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:54.309 issued rwts: total=2560,2721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:54.309 job3: (groupid=0, jobs=1): err= 0: pid=109111: Mon Jul 15 13:12:06 2024 00:27:54.309 read: IOPS=4742, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1003msec) 00:27:54.309 slat (usec): min=3, max=4976, avg=99.62, stdev=480.70 00:27:54.309 clat (usec): min=2749, max=18338, avg=13003.42, stdev=1640.11 00:27:54.309 lat (usec): min=2776, max=18361, avg=13103.04, stdev=1659.19 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 8225], 5.00th=[10814], 10.00th=[11207], 20.00th=[12125], 00:27:54.309 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:27:54.309 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14877], 95.00th=[15533], 00:27:54.309 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:27:54.309 | 99.99th=[18220] 00:27:54.309 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:27:54.309 slat (usec): min=9, max=4295, avg=94.50, stdev=405.78 00:27:54.309 clat (usec): min=8492, max=19664, avg=12659.40, stdev=1823.67 00:27:54.309 lat (usec): min=8512, max=19704, avg=12753.90, stdev=1817.41 00:27:54.309 clat percentiles (usec): 00:27:54.309 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11338], 00:27:54.309 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:27:54.309 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14877], 95.00th=[15926], 00:27:54.309 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19530], 99.95th=[19530], 00:27:54.309 | 99.99th=[19792] 00:27:54.309 bw ( KiB/s): min=20480, max=20480, per=31.45%, avg=20480.00, stdev= 0.00, samples=2 00:27:54.309 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:27:54.309 lat (msec) : 4=0.27%, 10=5.99%, 20=93.73% 00:27:54.309 cpu : usr=3.99%, sys=15.67%, ctx=503, majf=0, minf=9 00:27:54.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:54.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:54.309 issued rwts: total=4757,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:54.309 00:27:54.309 Run status group 0 (all jobs): 00:27:54.309 READ: bw=58.5MiB/s (61.4MB/s), 
9.98MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=58.7MiB (61.6MB), run=1002-1003msec 00:27:54.309 WRITE: bw=63.6MiB/s (66.7MB/s), 10.6MiB/s-21.1MiB/s (11.1MB/s-22.1MB/s), io=63.8MiB (66.9MB), run=1002-1003msec 00:27:54.309 00:27:54.310 Disk stats (read/write): 00:27:54.310 nvme0n1: ios=4556/4608, merge=0/0, ticks=12911/10802, in_queue=23713, util=88.28% 00:27:54.310 nvme0n2: ios=2242/2560, merge=0/0, ticks=12581/11907, in_queue=24488, util=87.37% 00:27:54.310 nvme0n3: ios=2048/2479, merge=0/0, ticks=12158/12848, in_queue=25006, util=88.89% 00:27:54.310 nvme0n4: ios=4096/4176, merge=0/0, ticks=16589/15134, in_queue=31723, util=89.65% 00:27:54.310 13:12:06 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:27:54.310 [global] 00:27:54.310 thread=1 00:27:54.310 invalidate=1 00:27:54.310 rw=randwrite 00:27:54.310 time_based=1 00:27:54.310 runtime=1 00:27:54.310 ioengine=libaio 00:27:54.310 direct=1 00:27:54.310 bs=4096 00:27:54.310 iodepth=128 00:27:54.310 norandommap=0 00:27:54.310 numjobs=1 00:27:54.310 00:27:54.310 verify_dump=1 00:27:54.310 verify_backlog=512 00:27:54.310 verify_state_save=0 00:27:54.310 do_verify=1 00:27:54.310 verify=crc32c-intel 00:27:54.310 [job0] 00:27:54.310 filename=/dev/nvme0n1 00:27:54.310 [job1] 00:27:54.310 filename=/dev/nvme0n2 00:27:54.310 [job2] 00:27:54.310 filename=/dev/nvme0n3 00:27:54.310 [job3] 00:27:54.310 filename=/dev/nvme0n4 00:27:54.310 Could not set queue depth (nvme0n1) 00:27:54.310 Could not set queue depth (nvme0n2) 00:27:54.310 Could not set queue depth (nvme0n3) 00:27:54.310 Could not set queue depth (nvme0n4) 00:27:54.310 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:54.310 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:54.310 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:54.310 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:54.310 fio-3.35 00:27:54.310 Starting 4 threads 00:27:55.682 00:27:55.682 job0: (groupid=0, jobs=1): err= 0: pid=109165: Mon Jul 15 13:12:07 2024 00:27:55.682 read: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1011msec) 00:27:55.682 slat (usec): min=3, max=12311, avg=123.69, stdev=801.29 00:27:55.682 clat (usec): min=4181, max=26795, avg=15034.10, stdev=3889.88 00:27:55.682 lat (usec): min=5166, max=26806, avg=15157.79, stdev=3931.20 00:27:55.682 clat percentiles (usec): 00:27:55.682 | 1.00th=[ 6063], 5.00th=[10159], 10.00th=[11469], 20.00th=[12518], 00:27:55.682 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[14353], 00:27:55.682 | 70.00th=[15664], 80.00th=[18220], 90.00th=[21103], 95.00th=[23725], 00:27:55.682 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:27:55.682 | 99.99th=[26870] 00:27:55.682 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:27:55.682 slat (usec): min=4, max=9778, avg=93.60, stdev=331.89 00:27:55.682 clat (usec): min=3181, max=26772, avg=13603.28, stdev=2912.40 00:27:55.682 lat (usec): min=3206, max=26778, avg=13696.88, stdev=2937.05 00:27:55.682 clat percentiles (usec): 00:27:55.682 | 1.00th=[ 5014], 5.00th=[ 6915], 10.00th=[ 8717], 20.00th=[11731], 00:27:55.682 | 30.00th=[13435], 40.00th=[14615], 50.00th=[15008], 60.00th=[15139], 00:27:55.682 | 
70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15664], 00:27:55.682 | 99.00th=[15926], 99.50th=[17695], 99.90th=[26346], 99.95th=[26346], 00:27:55.682 | 99.99th=[26870] 00:27:55.682 bw ( KiB/s): min=17424, max=19478, per=28.52%, avg=18451.00, stdev=1452.40, samples=2 00:27:55.682 iops : min= 4356, max= 4869, avg=4612.50, stdev=362.75, samples=2 00:27:55.682 lat (msec) : 4=0.10%, 10=8.93%, 20=84.84%, 50=6.13% 00:27:55.682 cpu : usr=3.47%, sys=11.78%, ctx=647, majf=0, minf=3 00:27:55.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:55.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:55.682 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.682 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:55.682 job1: (groupid=0, jobs=1): err= 0: pid=109166: Mon Jul 15 13:12:07 2024 00:27:55.682 read: IOPS=5717, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec) 00:27:55.682 slat (usec): min=4, max=9984, avg=88.37, stdev=572.12 00:27:55.682 clat (usec): min=1265, max=20821, avg=11447.17, stdev=2692.10 00:27:55.682 lat (usec): min=4403, max=20826, avg=11535.54, stdev=2718.32 00:27:55.682 clat percentiles (usec): 00:27:55.682 | 1.00th=[ 5014], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9634], 00:27:55.682 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:27:55.682 | 70.00th=[11994], 80.00th=[12911], 90.00th=[15270], 95.00th=[17433], 00:27:55.682 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20579], 99.95th=[20841], 00:27:55.682 | 99.99th=[20841] 00:27:55.682 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:27:55.682 slat (usec): min=4, max=8691, avg=73.35, stdev=410.12 00:27:55.682 clat (usec): min=3266, max=20751, avg=10039.97, stdev=1923.24 00:27:55.682 lat (usec): min=3286, max=20758, avg=10113.32, stdev=1968.96 00:27:55.682 clat percentiles (usec): 00:27:55.683 | 1.00th=[ 4178], 5.00th=[ 5473], 10.00th=[ 7373], 20.00th=[ 9110], 00:27:55.683 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:27:55.683 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:27:55.683 | 99.00th=[12125], 99.50th=[15795], 99.90th=[20055], 99.95th=[20317], 00:27:55.683 | 99.99th=[20841] 00:27:55.683 bw ( KiB/s): min=24560, max=24625, per=38.01%, avg=24592.50, stdev=45.96, samples=2 00:27:55.683 iops : min= 6140, max= 6156, avg=6148.00, stdev=11.31, samples=2 00:27:55.683 lat (msec) : 2=0.01%, 4=0.27%, 10=31.46%, 20=67.86%, 50=0.40% 00:27:55.683 cpu : usr=4.97%, sys=14.21%, ctx=710, majf=0, minf=1 00:27:55.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:55.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:55.683 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:55.683 job2: (groupid=0, jobs=1): err= 0: pid=109167: Mon Jul 15 13:12:07 2024 00:27:55.683 read: IOPS=2548, BW=9.96MiB/s (10.4MB/s)(10.1MiB/1013msec) 00:27:55.683 slat (usec): min=5, max=23606, avg=198.76, stdev=1284.09 00:27:55.683 clat (usec): min=5554, max=77982, avg=22162.99, stdev=9940.13 00:27:55.683 lat (usec): min=5570, max=77999, avg=22361.74, stdev=10050.71 00:27:55.683 clat percentiles (usec): 00:27:55.683 | 1.00th=[ 7570], 
5.00th=[11600], 10.00th=[13566], 20.00th=[14484], 00:27:55.683 | 30.00th=[14877], 40.00th=[18482], 50.00th=[20579], 60.00th=[23462], 00:27:55.683 | 70.00th=[23987], 80.00th=[24511], 90.00th=[34866], 95.00th=[44303], 00:27:55.683 | 99.00th=[59507], 99.50th=[65799], 99.90th=[78119], 99.95th=[78119], 00:27:55.683 | 99.99th=[78119] 00:27:55.683 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:27:55.683 slat (usec): min=4, max=27877, avg=151.55, stdev=896.20 00:27:55.683 clat (usec): min=3703, max=77926, avg=23157.17, stdev=10448.49 00:27:55.683 lat (usec): min=3736, max=77935, avg=23308.72, stdev=10504.67 00:27:55.683 clat percentiles (usec): 00:27:55.683 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[11338], 20.00th=[13173], 00:27:55.683 | 30.00th=[19792], 40.00th=[23462], 50.00th=[24511], 60.00th=[25035], 00:27:55.683 | 70.00th=[25822], 80.00th=[26870], 90.00th=[31327], 95.00th=[39060], 00:27:55.683 | 99.00th=[66847], 99.50th=[66847], 99.90th=[67634], 99.95th=[78119], 00:27:55.683 | 99.99th=[78119] 00:27:55.683 bw ( KiB/s): min=11520, max=12232, per=18.36%, avg=11876.00, stdev=503.46, samples=2 00:27:55.683 iops : min= 2880, max= 3058, avg=2969.00, stdev=125.87, samples=2 00:27:55.683 lat (msec) : 4=0.11%, 10=4.32%, 20=31.78%, 50=61.16%, 100=2.64% 00:27:55.683 cpu : usr=3.16%, sys=6.42%, ctx=337, majf=0, minf=10 00:27:55.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:55.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:55.683 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:55.683 job3: (groupid=0, jobs=1): err= 0: pid=109168: Mon Jul 15 13:12:07 2024 00:27:55.683 read: IOPS=2174, BW=8697KiB/s (8906kB/s)(8732KiB/1004msec) 00:27:55.683 slat (usec): min=3, max=22142, avg=197.46, stdev=1353.61 00:27:55.683 clat (usec): min=3349, max=45801, avg=23531.43, stdev=8168.52 00:27:55.683 lat (usec): min=3375, max=45839, avg=23728.89, stdev=8232.33 00:27:55.683 clat percentiles (usec): 00:27:55.683 | 1.00th=[ 7963], 5.00th=[11731], 10.00th=[13304], 20.00th=[13566], 00:27:55.683 | 30.00th=[19792], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:27:55.683 | 70.00th=[25035], 80.00th=[30278], 90.00th=[34866], 95.00th=[38011], 00:27:55.683 | 99.00th=[43779], 99.50th=[45351], 99.90th=[45351], 99.95th=[45876], 00:27:55.683 | 99.99th=[45876] 00:27:55.683 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:27:55.683 slat (usec): min=4, max=24345, avg=214.62, stdev=1147.94 00:27:55.683 clat (msec): min=5, max=117, avg=29.62, stdev=19.79 00:27:55.683 lat (msec): min=5, max=117, avg=29.84, stdev=19.90 00:27:55.683 clat percentiles (msec): 00:27:55.683 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 23], 00:27:55.683 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:27:55.683 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 53], 95.00th=[ 74], 00:27:55.683 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:27:55.683 | 99.99th=[ 117] 00:27:55.683 bw ( KiB/s): min= 8208, max=12272, per=15.83%, avg=10240.00, stdev=2873.68, samples=2 00:27:55.683 iops : min= 2052, max= 3068, avg=2560.00, stdev=718.42, samples=2 00:27:55.683 lat (msec) : 4=0.21%, 10=3.01%, 20=19.12%, 50=71.79%, 100=4.53% 00:27:55.683 lat (msec) : 250=1.33% 00:27:55.683 cpu : usr=2.99%, sys=5.18%, ctx=387, majf=0, minf=9 
00:27:55.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:55.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:55.683 issued rwts: total=2183,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:55.683 00:27:55.683 Run status group 0 (all jobs): 00:27:55.683 READ: bw=57.3MiB/s (60.1MB/s), 8697KiB/s-22.3MiB/s (8906kB/s-23.4MB/s), io=58.1MiB (60.9MB), run=1004-1013msec 00:27:55.683 WRITE: bw=63.2MiB/s (66.2MB/s), 9.96MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=64.0MiB (67.1MB), run=1004-1013msec 00:27:55.683 00:27:55.683 Disk stats (read/write): 00:27:55.683 nvme0n1: ios=3634/3951, merge=0/0, ticks=50532/52131, in_queue=102663, util=88.18% 00:27:55.683 nvme0n2: ios=4815/5120, merge=0/0, ticks=51430/49198, in_queue=100628, util=88.35% 00:27:55.683 nvme0n3: ios=2048/2559, merge=0/0, ticks=44203/56476, in_queue=100679, util=88.74% 00:27:55.683 nvme0n4: ios=2048/2103, merge=0/0, ticks=45872/57503, in_queue=103375, util=89.58% 00:27:55.683 13:12:07 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:27:55.683 13:12:07 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=109177 00:27:55.683 13:12:07 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:27:55.683 13:12:07 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:27:55.683 [global] 00:27:55.683 thread=1 00:27:55.683 invalidate=1 00:27:55.683 rw=read 00:27:55.683 time_based=1 00:27:55.683 runtime=10 00:27:55.683 ioengine=libaio 00:27:55.683 direct=1 00:27:55.683 bs=4096 00:27:55.683 iodepth=1 00:27:55.683 norandommap=1 00:27:55.683 numjobs=1 00:27:55.683 00:27:55.683 [job0] 00:27:55.683 filename=/dev/nvme0n1 00:27:55.683 [job1] 00:27:55.683 filename=/dev/nvme0n2 00:27:55.683 [job2] 00:27:55.683 filename=/dev/nvme0n3 00:27:55.683 [job3] 00:27:55.683 filename=/dev/nvme0n4 00:27:55.683 Could not set queue depth (nvme0n1) 00:27:55.683 Could not set queue depth (nvme0n2) 00:27:55.683 Could not set queue depth (nvme0n3) 00:27:55.683 Could not set queue depth (nvme0n4) 00:27:55.683 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:55.683 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:55.683 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:55.683 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:55.683 fio-3.35 00:27:55.683 Starting 4 threads 00:27:59.029 13:12:10 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:27:59.029 fio: pid=109230, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:59.029 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39886848, buflen=4096 00:27:59.029 13:12:11 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:27:59.287 fio: pid=109229, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:59.287 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=64671744, buflen=4096 
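(Editor's note, a hedged sketch, not part of the captured output.) The Remote I/O errors above are the point of this phase of the test: target/fio.sh starts a 10-second read job (fio_pid=109177) and then deletes the backing bdevs over JSON-RPC while that I/O is still in flight, so fio is expected to see read failures on the affected namespaces. A minimal sketch of that deletion sequence, using only rpc.py calls that appear in this log; the loop is an illustration, not the script's exact code, and the Malloc deletions follow in the next log lines:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# remove the RAID/concat bdevs first, then the plain Malloc bdevs,
# while the fio read job is still running against the exported namespaces
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$m"
done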
00:27:59.287 13:12:11 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:59.287 13:12:11 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:27:59.860 fio: pid=109220, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:27:59.860 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=58118144, buflen=4096 00:27:59.860 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:27:59.860 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:28:00.119 fio: pid=109227, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:28:00.119 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=62476288, buflen=4096 00:28:00.119 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:00.119 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:28:00.119 00:28:00.119 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=109220: Mon Jul 15 13:12:12 2024 00:28:00.119 read: IOPS=3679, BW=14.4MiB/s (15.1MB/s)(55.4MiB/3857msec) 00:28:00.119 slat (usec): min=13, max=20435, avg=25.61, stdev=260.77 00:28:00.119 clat (usec): min=104, max=3119, avg=244.16, stdev=52.97 00:28:00.119 lat (usec): min=156, max=20725, avg=269.77, stdev=268.09 00:28:00.119 clat percentiles (usec): 00:28:00.119 | 1.00th=[ 165], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 227], 00:28:00.119 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:28:00.119 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:28:00.119 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 742], 99.95th=[ 1139], 00:28:00.119 | 99.99th=[ 2311] 00:28:00.119 bw ( KiB/s): min=13440, max=15600, per=27.84%, avg=14632.00, stdev=710.97, samples=7 00:28:00.119 iops : min= 3360, max= 3900, avg=3658.00, stdev=177.74, samples=7 00:28:00.119 lat (usec) : 250=63.36%, 500=36.48%, 750=0.05%, 1000=0.04% 00:28:00.119 lat (msec) : 2=0.01%, 4=0.04% 00:28:00.119 cpu : usr=1.48%, sys=6.41%, ctx=14195, majf=0, minf=1 00:28:00.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 issued rwts: total=14190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:00.119 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=109227: Mon Jul 15 13:12:12 2024 00:28:00.119 read: IOPS=3645, BW=14.2MiB/s (14.9MB/s)(59.6MiB/4184msec) 00:28:00.119 slat (usec): min=11, max=14696, avg=23.38, stdev=207.52 00:28:00.119 clat (usec): min=66, max=3459, avg=249.06, stdev=80.86 00:28:00.119 lat (usec): min=149, max=14967, avg=272.44, stdev=222.13 00:28:00.119 clat percentiles (usec): 00:28:00.119 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:28:00.119 | 30.00th=[ 178], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 
277], 00:28:00.119 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 367], 00:28:00.119 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 502], 99.95th=[ 570], 00:28:00.119 | 99.99th=[ 2540] 00:28:00.119 bw ( KiB/s): min=12184, max=21472, per=27.42%, avg=14409.12, stdev=3417.30, samples=8 00:28:00.119 iops : min= 3046, max= 5368, avg=3602.25, stdev=854.29, samples=8 00:28:00.119 lat (usec) : 100=0.01%, 250=37.12%, 500=62.76%, 750=0.07%, 1000=0.01% 00:28:00.119 lat (msec) : 2=0.01%, 4=0.02% 00:28:00.119 cpu : usr=1.43%, sys=5.67%, ctx=15272, majf=0, minf=1 00:28:00.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 issued rwts: total=15254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:00.119 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=109229: Mon Jul 15 13:12:12 2024 00:28:00.119 read: IOPS=4480, BW=17.5MiB/s (18.4MB/s)(61.7MiB/3524msec) 00:28:00.119 slat (usec): min=14, max=7804, avg=22.27, stdev=86.27 00:28:00.119 clat (usec): min=3, max=2349, avg=199.05, stdev=54.45 00:28:00.119 lat (usec): min=156, max=7988, avg=221.33, stdev=102.81 00:28:00.119 clat percentiles (usec): 00:28:00.119 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:28:00.119 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 192], 00:28:00.119 | 70.00th=[ 217], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 265], 00:28:00.119 | 99.00th=[ 293], 99.50th=[ 388], 99.90th=[ 766], 99.95th=[ 971], 00:28:00.119 | 99.99th=[ 1926] 00:28:00.119 bw ( KiB/s): min=14952, max=20384, per=33.73%, avg=17726.67, stdev=2269.82, samples=6 00:28:00.119 iops : min= 3738, max= 5096, avg=4431.67, stdev=567.46, samples=6 00:28:00.119 lat (usec) : 4=0.01%, 250=88.69%, 500=11.03%, 750=0.16%, 1000=0.06% 00:28:00.119 lat (msec) : 2=0.04%, 4=0.01% 00:28:00.119 cpu : usr=1.48%, sys=7.86%, ctx=15808, majf=0, minf=1 00:28:00.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.119 issued rwts: total=15790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:00.119 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=109230: Mon Jul 15 13:12:12 2024 00:28:00.119 read: IOPS=3152, BW=12.3MiB/s (12.9MB/s)(38.0MiB/3089msec) 00:28:00.119 slat (usec): min=12, max=136, avg=18.08, stdev= 5.86 00:28:00.119 clat (usec): min=63, max=1999, avg=296.93, stdev=48.11 00:28:00.119 lat (usec): min=200, max=2021, avg=315.01, stdev=47.88 00:28:00.119 clat percentiles (usec): 00:28:00.119 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:28:00.119 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:28:00.119 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 388], 00:28:00.119 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 498], 99.95th=[ 938], 00:28:00.119 | 99.99th=[ 2008] 00:28:00.119 bw ( KiB/s): min=12184, max=13000, per=24.06%, avg=12644.00, stdev=332.70, samples=6 00:28:00.120 iops : min= 3046, max= 3250, avg=3161.00, stdev=83.17, samples=6 00:28:00.120 lat (usec) : 
100=0.01%, 250=1.56%, 500=98.33%, 750=0.04%, 1000=0.01% 00:28:00.120 lat (msec) : 2=0.04% 00:28:00.120 cpu : usr=1.20%, sys=5.70%, ctx=9769, majf=0, minf=1 00:28:00.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:00.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.120 issued rwts: total=9739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:00.120 00:28:00.120 Run status group 0 (all jobs): 00:28:00.120 READ: bw=51.3MiB/s (53.8MB/s), 12.3MiB/s-17.5MiB/s (12.9MB/s-18.4MB/s), io=215MiB (225MB), run=3089-4184msec 00:28:00.120 00:28:00.120 Disk stats (read/write): 00:28:00.120 nvme0n1: ios=14189/0, merge=0/0, ticks=3536/0, in_queue=3536, util=95.07% 00:28:00.120 nvme0n2: ios=14858/0, merge=0/0, ticks=3796/0, in_queue=3796, util=95.63% 00:28:00.120 nvme0n3: ios=15010/0, merge=0/0, ticks=3086/0, in_queue=3086, util=96.59% 00:28:00.120 nvme0n4: ios=9043/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.78% 00:28:00.378 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:00.378 13:12:12 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:28:00.942 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:00.942 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:28:01.199 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:01.199 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:28:01.457 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:01.457 13:12:13 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 109177 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:01.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:01.716 
13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:28:01.716 nvmf hotplug test: fio failed as expected 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:28:01.716 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.282 rmmod nvme_tcp 00:28:02.282 rmmod nvme_fabrics 00:28:02.282 rmmod nvme_keyring 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # '[' -n 108691 ']' 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # killprocess 108691 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 108691 ']' 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 108691 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108691 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:02.282 killing process with pid 108691 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108691' 00:28:02.282 
13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 108691 00:28:02.282 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 108691 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:28:02.540 00:28:02.540 real 0m21.016s 00:28:02.540 user 0m59.897s 00:28:02.540 sys 0m15.161s 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 ************************************ 00:28:02.540 END TEST nvmf_fio_target 00:28:02.540 ************************************ 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 ************************************ 00:28:02.540 START TEST nvmf_bdevio 00:28:02.540 ************************************ 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:28:02.540 * Looking for test storage... 
00:28:02.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.540 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
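(Editor's note, a hedged sketch, not part of the captured output.) Because NET_TYPE=virt and the transport is tcp, prepare_net_devs skips the physical-NIC paths (is_hw=no) and calls nvmf_veth_init, which builds the test network that the following log lines bring up: a veth pair for the initiator on the host side, veth pairs for the target inside the nvmf_tgt_ns_spdk namespace, and a bridge tying them together, with TCP port 4420 opened for the NVMe-oF listener. A condensed sketch of that topology, using commands copied from the log below (link-up steps, the second target interface nvmf_tgt_if2/10.0.0.3, and error handling are elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # allow the NVMe/TCP port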
00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # nvmf_veth_init 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:28:02.541 Cannot find device "nvmf_tgt_br" 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:28:02.541 13:12:14 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:28:02.541 Cannot find device "nvmf_tgt_br2" 00:28:02.541 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # true 00:28:02.541 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:28:02.541 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- 
# ip link set nvmf_tgt_br down 00:28:02.799 Cannot find device "nvmf_tgt_br" 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:28:02.799 Cannot find device "nvmf_tgt_br2" 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:02.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:02.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:28:02.799 13:12:15 
nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:28:02.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:28:02.799 00:28:02.799 --- 10.0.0.2 ping statistics --- 00:28:02.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.799 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:28:02.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:02.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:28:02.799 00:28:02.799 --- 10.0.0.3 ping statistics --- 00:28:02.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.799 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:02.799 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:03.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:28:03.057 00:28:03.057 --- 10.0.0.1 ping statistics --- 00:28:03.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.057 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@437 -- # return 0 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@485 -- # nvmfpid=109553 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@486 -- # waitforlisten 109553 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 109553 ']' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.058 13:12:15 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:03.058 [2024-07-15 13:12:15.368896] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:03.058 [2024-07-15 13:12:15.370489] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:28:03.058 [2024-07-15 13:12:15.370580] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.058 [2024-07-15 13:12:15.512945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.315 [2024-07-15 13:12:15.580849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.315 [2024-07-15 13:12:15.580934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.315 [2024-07-15 13:12:15.580955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.315 [2024-07-15 13:12:15.580968] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.315 [2024-07-15 13:12:15.580979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.316 [2024-07-15 13:12:15.581076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:03.316 [2024-07-15 13:12:15.581172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:03.316 [2024-07-15 13:12:15.581677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:03.316 [2024-07-15 13:12:15.581691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.316 [2024-07-15 13:12:15.641154] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:03.316 [2024-07-15 13:12:15.641362] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:03.316 [2024-07-15 13:12:15.641562] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:03.316 [2024-07-15 13:12:15.642287] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:03.316 [2024-07-15 13:12:15.642855] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
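For reference, the veth/namespace topology that nvmf_veth_init builds in the log above can be reproduced by hand with plain iproute2 and iptables. This is a condensed sketch reusing the interface names and addresses from the log, not the harness code itself; the teardown guards and error handling in nvmf/common.sh are omitted.

  # target namespace plus three veth pairs (initiator, target, secondary target)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring links up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP (port 4420) in, let the bridge forward, then sanity-ping both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart launches the target inside it exactly as logged (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78), and the thread.c/reactor.c notices above confirm that the app thread and every nvmf_tgt_poll_group thread came up in interrupt mode.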
00:28:03.881 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.881 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:28:03.881 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:03.881 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:03.881 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 [2024-07-15 13:12:16.374860] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 Malloc0 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 [2024-07-15 13:12:16.427098] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio 
-- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@536 -- # config=() 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@536 -- # local subsystem config 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:04.139 { 00:28:04.139 "params": { 00:28:04.139 "name": "Nvme$subsystem", 00:28:04.139 "trtype": "$TEST_TRANSPORT", 00:28:04.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.139 "adrfam": "ipv4", 00:28:04.139 "trsvcid": "$NVMF_PORT", 00:28:04.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.139 "hdgst": ${hdgst:-false}, 00:28:04.139 "ddgst": ${ddgst:-false} 00:28:04.139 }, 00:28:04.139 "method": "bdev_nvme_attach_controller" 00:28:04.139 } 00:28:04.139 EOF 00:28:04.139 )") 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # cat 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # jq . 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@561 -- # IFS=, 00:28:04.139 13:12:16 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:04.139 "params": { 00:28:04.139 "name": "Nvme1", 00:28:04.139 "trtype": "tcp", 00:28:04.139 "traddr": "10.0.0.2", 00:28:04.139 "adrfam": "ipv4", 00:28:04.139 "trsvcid": "4420", 00:28:04.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.139 "hdgst": false, 00:28:04.139 "ddgst": false 00:28:04.139 }, 00:28:04.139 "method": "bdev_nvme_attach_controller" 00:28:04.139 }' 00:28:04.139 [2024-07-15 13:12:16.505816] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
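The provisioning done through rpc_cmd above is the standard SPDK RPC flow; outside the harness the same steps could be issued directly with the usual scripts/rpc.py against the default /var/tmp/spdk.sock socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragment printed by gen_nvmf_target_json is what bdevio consumes via --json /dev/fd/62. Written out as a file, a roughly equivalent standalone invocation could look like the sketch below; note that the "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here, since the log only shows the inner bdev_nvme_attach_controller entry.

  cat > /tmp/bdevio_nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme1.json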
00:28:04.139 [2024-07-15 13:12:16.505948] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109604 ] 00:28:04.397 [2024-07-15 13:12:16.675407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.397 [2024-07-15 13:12:16.774606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.397 [2024-07-15 13:12:16.774723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.397 [2024-07-15 13:12:16.774734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.655 I/O targets: 00:28:04.655 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:04.655 00:28:04.655 00:28:04.655 CUnit - A unit testing framework for C - Version 2.1-3 00:28:04.655 http://cunit.sourceforge.net/ 00:28:04.655 00:28:04.655 00:28:04.655 Suite: bdevio tests on: Nvme1n1 00:28:04.655 Test: blockdev write read block ...passed 00:28:04.655 Test: blockdev write zeroes read block ...passed 00:28:04.655 Test: blockdev write zeroes read no split ...passed 00:28:04.655 Test: blockdev write zeroes read split ...passed 00:28:04.655 Test: blockdev write zeroes read split partial ...passed 00:28:04.655 Test: blockdev reset ...[2024-07-15 13:12:17.048390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.655 [2024-07-15 13:12:17.048537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1beb180 (9): Bad file descriptor 00:28:04.655 [2024-07-15 13:12:17.052139] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:04.655 passed 00:28:04.655 Test: blockdev write read 8 blocks ...passed 00:28:04.655 Test: blockdev write read size > 128k ...passed 00:28:04.655 Test: blockdev write read invalid size ...passed 00:28:04.655 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:04.655 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:04.655 Test: blockdev write read max offset ...passed 00:28:04.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:04.913 Test: blockdev writev readv 8 blocks ...passed 00:28:04.913 Test: blockdev writev readv 30 x 1block ...passed 00:28:04.913 Test: blockdev writev readv block ...passed 00:28:04.913 Test: blockdev writev readv size > 128k ...passed 00:28:04.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:04.913 Test: blockdev comparev and writev ...[2024-07-15 13:12:17.227113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.227200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.227237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.227268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.227785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.227823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.227857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.227876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.228393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.228448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.228482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.228504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.229016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.229069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.229103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:04.913 [2024-07-15 13:12:17.229123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:04.913 passed 00:28:04.913 Test: blockdev nvme passthru rw ...passed 00:28:04.913 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:12:17.313666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.913 [2024-07-15 13:12:17.313991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.314518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.913 [2024-07-15 13:12:17.314758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.315170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.913 [2024-07-15 13:12:17.315408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:28:04.913 Test: blockdev nvme admin passthru ...qhd:002e p:0 m:0 dnr:0 00:28:04.913 [2024-07-15 13:12:17.315794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.913 [2024-07-15 13:12:17.315837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:04.913 passed 00:28:04.913 Test: blockdev copy ...passed 00:28:04.913 00:28:04.913 Run Summary: Type Total Ran Passed Failed Inactive 00:28:04.913 suites 1 1 n/a 0 0 00:28:04.913 tests 23 23 23 0 0 00:28:04.913 asserts 152 152 152 0 n/a 00:28:04.913 00:28:04.913 Elapsed time = 0.872 seconds 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.170 rmmod nvme_tcp 00:28:05.170 rmmod nvme_fabrics 00:28:05.170 rmmod nvme_keyring 00:28:05.170 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:28:05.428 13:12:17 
nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # '[' -n 109553 ']' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # killprocess 109553 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 109553 ']' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 109553 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109553 00:28:05.428 killing process with pid 109553 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109553' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 109553 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 109553 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.428 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:28:05.687 ************************************ 00:28:05.687 END TEST nvmf_bdevio 00:28:05.687 ************************************ 00:28:05.687 00:28:05.687 real 0m3.034s 00:28:05.687 user 0m6.892s 00:28:05.687 sys 0m1.257s 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@61 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:05.687 ************************************ 00:28:05.687 START TEST nvmf_auth_target 00:28:05.687 ************************************ 
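Before the next test starts, the teardown logged just above (nvmftestfini) amounts to roughly the following. This is a hedged recap: the assumption that _remove_spdk_ns simply deletes the spdk-created namespace is not shown verbatim in the log.

  sync
  modprobe -v -r nvme-tcp       # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 109553
  ip netns delete nvmf_tgt_ns_spdk     # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if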
00:28:05.687 13:12:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:28:05.687 * Looking for test storage... 00:28:05.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@436 -- # nvmf_veth_init 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@151 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:28:05.687 Cannot find device "nvmf_tgt_br" 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:28:05.687 Cannot find device "nvmf_tgt_br2" 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@160 -- # true 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:28:05.687 Cannot find device "nvmf_tgt_br" 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:28:05.687 Cannot find device "nvmf_tgt_br2" 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:28:05.687 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:05.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:05.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@178 -- # ip link set 
nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:28:05.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:28:05.945 00:28:05.945 --- 10.0.0.2 ping statistics --- 00:28:05.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.945 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:28:05.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:05.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:28:05.945 00:28:05.945 --- 10.0.0.3 ping statistics --- 00:28:05.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.945 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:05.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:28:05.945 00:28:05.945 --- 10.0.0.1 ping statistics --- 00:28:05.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.945 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@437 -- # return 0 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:05.945 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.946 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:05.946 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:05.946 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.946 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:05.946 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=109784 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -L nvmf_auth 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 109784 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 109784 ']' 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
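waitforlisten, used twice in this test (once for the target on /var/tmp/spdk.sock and later for the host application on /var/tmp/host.sock), is essentially a poll loop on the RPC socket. A simplified stand-in is sketched below, assuming rpc_get_methods as the liveness probe; the real helper in autotest_common.sh also verifies the pid and bounds the retries (max_retries=100 in the log).

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -L nvmf_auth &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # the RPC socket only answers once the app has finished initializing
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done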
00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.203 13:12:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=109827 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=null 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=c361dece1a025f1a61a2c057af491ca92497cc55ca48d865 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.11z 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key c361dece1a025f1a61a2c057af491ca92497cc55ca48d865 0 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 c361dece1a025f1a61a2c057af491ca92497cc55ca48d865 0 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=c361dece1a025f1a61a2c057af491ca92497cc55ca48d865 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=0 00:28:07.136 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 
/tmp/spdk.key-null.11z 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.11z 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.11z 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.394 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=e13afece421ce6c346b387c61b9efb891258a45e951206113a9dc5c23b0d1fe3 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.oEm 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key e13afece421ce6c346b387c61b9efb891258a45e951206113a9dc5c23b0d1fe3 3 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 e13afece421ce6c346b387c61b9efb891258a45e951206113a9dc5c23b0d1fe3 3 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=e13afece421ce6c346b387c61b9efb891258a45e951206113a9dc5c23b0d1fe3 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.oEm 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.oEm 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.oEm 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- 
# xxd -p -c0 -l 16 /dev/urandom 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=1b8fc3712cb71a507a89421bdc0f6937 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.rVP 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 1b8fc3712cb71a507a89421bdc0f6937 1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 1b8fc3712cb71a507a89421bdc0f6937 1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=1b8fc3712cb71a507a89421bdc0f6937 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.rVP 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.rVP 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.rVP 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=e96f2a395105e64ad13ebab34918e791af4fd37edef31118 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.svW 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key e96f2a395105e64ad13ebab34918e791af4fd37edef31118 2 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 e96f2a395105e64ad13ebab34918e791af4fd37edef31118 2 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=e96f2a395105e64ad13ebab34918e791af4fd37edef31118 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
nvmf/common.sh@708 -- # digest=2 00:28:07.395 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.svW 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.svW 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.svW 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=43deb22ea201648304404a00142019c2f5fc0b5f95a275a9 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.GHA 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 43deb22ea201648304404a00142019c2f5fc0b5f95a275a9 2 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 43deb22ea201648304404a00142019c2f5fc0b5f95a275a9 2 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=43deb22ea201648304404a00142019c2f5fc0b5f95a275a9 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=2 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.GHA 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.GHA 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.GHA 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 
00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=801a5d958430b65be7ae21d741933608 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.UT1 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 801a5d958430b65be7ae21d741933608 1 00:28:07.654 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 801a5d958430b65be7ae21d741933608 1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=801a5d958430b65be7ae21d741933608 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.UT1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.UT1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.UT1 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@731 -- # key=f34c8adb89ad288d2861fa83829b8eebb665cdefa3bf196a275470351c22b179 00:28:07.655 13:12:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.ynf 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key f34c8adb89ad288d2861fa83829b8eebb665cdefa3bf196a275470351c22b179 3 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 f34c8adb89ad288d2861fa83829b8eebb665cdefa3bf196a275470351c22b179 3 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # 
prefix=DHHC-1 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # key=f34c8adb89ad288d2861fa83829b8eebb665cdefa3bf196a275470351c22b179 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.ynf 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.ynf 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ynf 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 109784 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 109784 ']' 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.655 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 109827 /var/tmp/host.sock 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 109827 ']' 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:08.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
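The two waitforlisten calls above are just polling loops keyed on the PIDs printed in the trace: one waits for the nvmf target on the default /var/tmp/spdk.sock, the other for the second SPDK app that plays the host role on /var/tmp/host.sock. A rough, simplified equivalent, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more involved):

    # Rough sketch of the wait: poll the RPC socket until the app answers.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    wait_for_rpc() {
        local sock=$1 retries=${2:-100}
        while ((retries-- > 0)); do
            "$rpc" -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk.sock    # nvmf target (pid 109784 above)
    wait_for_rpc /var/tmp/host.sock    # host-side app (pid 109827 above)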
00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:08.219 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.11z 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.11z 00:28:08.477 13:12:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.11z 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.oEm ]] 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oEm 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oEm 00:28:09.041 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oEm 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rVP 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.299 13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rVP 00:28:09.299 
13:12:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rVP 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.svW ]] 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.svW 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.svW 00:28:09.865 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.svW 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GHA 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GHA 00:28:10.123 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GHA 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.UT1 ]] 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT1 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT1 00:28:10.688 13:12:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT1 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ynf 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ynf 00:28:10.959 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ynf 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:11.219 13:12:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.782 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.047 00:28:12.304 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:12.304 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:12.304 
13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:12.577 { 00:28:12.577 "auth": { 00:28:12.577 "dhgroup": "null", 00:28:12.577 "digest": "sha256", 00:28:12.577 "state": "completed" 00:28:12.577 }, 00:28:12.577 "cntlid": 1, 00:28:12.577 "listen_address": { 00:28:12.577 "adrfam": "IPv4", 00:28:12.577 "traddr": "10.0.0.2", 00:28:12.577 "trsvcid": "4420", 00:28:12.577 "trtype": "TCP" 00:28:12.577 }, 00:28:12.577 "peer_address": { 00:28:12.577 "adrfam": "IPv4", 00:28:12.577 "traddr": "10.0.0.1", 00:28:12.577 "trsvcid": "53732", 00:28:12.577 "trtype": "TCP" 00:28:12.577 }, 00:28:12.577 "qid": 0, 00:28:12.577 "state": "enabled", 00:28:12.577 "thread": "nvmf_tgt_poll_group_000" 00:28:12.577 } 00:28:12.577 ]' 00:28:12.577 13:12:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:12.577 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:12.577 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:12.835 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:12.835 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:12.835 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:12.835 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:12.835 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:13.092 13:12:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:14.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:14.024 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.589 13:12:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.846 00:28:14.846 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:14.846 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:14.846 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
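Stripped of the xtrace noise, the round just traced boils down to a short RPC choreography: the secret files are registered as keyring entries on both apps (done once for all four keys earlier), the host is allowed on the subsystem with a given key pair, the controller is attached from the host-side app, and the resulting qpair is inspected. Condensed from the traces above for the key1/ckey1 round, with the same NQNs and paths as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

    # Register the secret files on the target (default /var/tmp/spdk.sock)
    # and on the host-side app (/var/tmp/host.sock).
    "$rpc" keyring_file_add_key key1 /tmp/spdk.key-sha256.rVP
    "$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.svW
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rVP
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.svW

    # Allow the host on the subsystem with those keys, then attach from the host app.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1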
00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:15.411 { 00:28:15.411 "auth": { 00:28:15.411 "dhgroup": "null", 00:28:15.411 "digest": "sha256", 00:28:15.411 "state": "completed" 00:28:15.411 }, 00:28:15.411 "cntlid": 3, 00:28:15.411 "listen_address": { 00:28:15.411 "adrfam": "IPv4", 00:28:15.411 "traddr": "10.0.0.2", 00:28:15.411 "trsvcid": "4420", 00:28:15.411 "trtype": "TCP" 00:28:15.411 }, 00:28:15.411 "peer_address": { 00:28:15.411 "adrfam": "IPv4", 00:28:15.411 "traddr": "10.0.0.1", 00:28:15.411 "trsvcid": "53760", 00:28:15.411 "trtype": "TCP" 00:28:15.411 }, 00:28:15.411 "qid": 0, 00:28:15.411 "state": "enabled", 00:28:15.411 "thread": "nvmf_tgt_poll_group_000" 00:28:15.411 } 00:28:15.411 ]' 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:15.411 13:12:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:15.976 13:12:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:16.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:16.910 13:12:29 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.910 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.168 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.168 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.168 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.426 00:28:17.426 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:17.426 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:17.426 13:12:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:17.684 { 00:28:17.684 "auth": { 00:28:17.684 "dhgroup": "null", 00:28:17.684 "digest": "sha256", 00:28:17.684 "state": "completed" 00:28:17.684 }, 00:28:17.684 "cntlid": 5, 00:28:17.684 "listen_address": { 00:28:17.684 "adrfam": "IPv4", 
00:28:17.684 "traddr": "10.0.0.2", 00:28:17.684 "trsvcid": "4420", 00:28:17.684 "trtype": "TCP" 00:28:17.684 }, 00:28:17.684 "peer_address": { 00:28:17.684 "adrfam": "IPv4", 00:28:17.684 "traddr": "10.0.0.1", 00:28:17.684 "trsvcid": "53782", 00:28:17.684 "trtype": "TCP" 00:28:17.684 }, 00:28:17.684 "qid": 0, 00:28:17.684 "state": "enabled", 00:28:17.684 "thread": "nvmf_tgt_poll_group_000" 00:28:17.684 } 00:28:17.684 ]' 00:28:17.684 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:17.942 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:18.199 13:12:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:19.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:19.131 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:19.696 13:12:31 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:19.696 13:12:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:19.954 00:28:19.955 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:19.955 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:19.955 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:20.212 { 00:28:20.212 "auth": { 00:28:20.212 "dhgroup": "null", 00:28:20.212 "digest": "sha256", 00:28:20.212 "state": "completed" 00:28:20.212 }, 00:28:20.212 "cntlid": 7, 00:28:20.212 "listen_address": { 00:28:20.212 "adrfam": "IPv4", 00:28:20.212 "traddr": "10.0.0.2", 00:28:20.212 "trsvcid": "4420", 00:28:20.212 "trtype": "TCP" 00:28:20.212 }, 00:28:20.212 "peer_address": { 00:28:20.212 "adrfam": "IPv4", 00:28:20.212 "traddr": "10.0.0.1", 00:28:20.212 "trsvcid": "53816", 00:28:20.212 "trtype": "TCP" 00:28:20.212 }, 00:28:20.212 "qid": 0, 00:28:20.212 "state": "enabled", 00:28:20.212 "thread": "nvmf_tgt_poll_group_000" 00:28:20.212 } 00:28:20.212 ]' 00:28:20.212 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 
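The [[ ... ]] comparisons around here are the actual assertions of the test: the auth block reported by nvmf_subsystem_get_qpairs must echo back the digest and DH group that were configured, and the handshake must have reached the completed state. In sketch form, for the sha256/null rounds shown here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]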
00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:20.470 13:12:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.035 13:12:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:21.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:21.600 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 
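After the SPDK-to-SPDK attach is verified and detached, the same key pair is exercised from the Linux kernel initiator: nvme connect is given the formatted DHHC-1 strings directly (their base64 payload is the hex key generated at the top of this section), and the session is torn down again before the host entry is removed for the next round. Schematically, with the secrets elided:

    # <host secret>/<ctrl secret> stand in for the full DHHC-1:xx:...: strings
    # that appear verbatim in the connect traces above.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a \
        --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a \
        --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0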
00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.165 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.422 00:28:22.422 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:22.422 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:22.422 13:12:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:22.680 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.680 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:22.680 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.680 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:22.938 { 00:28:22.938 "auth": { 00:28:22.938 "dhgroup": "ffdhe2048", 00:28:22.938 "digest": "sha256", 00:28:22.938 "state": "completed" 00:28:22.938 }, 00:28:22.938 "cntlid": 9, 00:28:22.938 "listen_address": { 00:28:22.938 "adrfam": "IPv4", 00:28:22.938 "traddr": "10.0.0.2", 00:28:22.938 "trsvcid": "4420", 00:28:22.938 "trtype": "TCP" 00:28:22.938 }, 00:28:22.938 "peer_address": { 00:28:22.938 "adrfam": "IPv4", 00:28:22.938 "traddr": "10.0.0.1", 00:28:22.938 "trsvcid": "60374", 00:28:22.938 "trtype": "TCP" 00:28:22.938 }, 00:28:22.938 "qid": 0, 00:28:22.938 "state": "enabled", 00:28:22.938 "thread": "nvmf_tgt_poll_group_000" 00:28:22.938 } 00:28:22.938 ]' 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:22.938 13:12:35 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:22.938 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:23.503 13:12:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:24.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.436 13:12:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.001 00:28:25.001 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:25.001 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:25.001 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:25.257 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.257 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:25.257 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.257 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.514 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:25.515 { 00:28:25.515 "auth": { 00:28:25.515 "dhgroup": "ffdhe2048", 00:28:25.515 "digest": "sha256", 00:28:25.515 "state": "completed" 00:28:25.515 }, 00:28:25.515 "cntlid": 11, 00:28:25.515 "listen_address": { 00:28:25.515 "adrfam": "IPv4", 00:28:25.515 "traddr": "10.0.0.2", 00:28:25.515 "trsvcid": "4420", 00:28:25.515 "trtype": "TCP" 00:28:25.515 }, 00:28:25.515 "peer_address": { 00:28:25.515 "adrfam": "IPv4", 00:28:25.515 "traddr": "10.0.0.1", 00:28:25.515 "trsvcid": "60396", 00:28:25.515 "trtype": "TCP" 00:28:25.515 }, 00:28:25.515 "qid": 0, 00:28:25.515 "state": "enabled", 00:28:25.515 "thread": "nvmf_tgt_poll_group_000" 00:28:25.515 } 00:28:25.515 ]' 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:25.515 13:12:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:26.077 13:12:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:26.643 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.207 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:28:27.464 00:28:27.464 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:27.464 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:27.464 13:12:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:27.720 { 00:28:27.720 "auth": { 00:28:27.720 "dhgroup": "ffdhe2048", 00:28:27.720 "digest": "sha256", 00:28:27.720 "state": "completed" 00:28:27.720 }, 00:28:27.720 "cntlid": 13, 00:28:27.720 "listen_address": { 00:28:27.720 "adrfam": "IPv4", 00:28:27.720 "traddr": "10.0.0.2", 00:28:27.720 "trsvcid": "4420", 00:28:27.720 "trtype": "TCP" 00:28:27.720 }, 00:28:27.720 "peer_address": { 00:28:27.720 "adrfam": "IPv4", 00:28:27.720 "traddr": "10.0.0.1", 00:28:27.720 "trsvcid": "60434", 00:28:27.720 "trtype": "TCP" 00:28:27.720 }, 00:28:27.720 "qid": 0, 00:28:27.721 "state": "enabled", 00:28:27.721 "thread": "nvmf_tgt_poll_group_000" 00:28:27.721 } 00:28:27.721 ]' 00:28:27.721 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:27.721 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:27.721 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:27.978 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:27.978 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:27.978 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.978 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.978 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:28.235 13:12:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:29.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:29.605 13:12:41 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.605 13:12:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:29.862 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:28:29.862 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:29.863 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:30.120 00:28:30.378 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:30.378 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:30.378 13:12:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.651 13:12:43 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:30.651 { 00:28:30.651 "auth": { 00:28:30.651 "dhgroup": "ffdhe2048", 00:28:30.651 "digest": "sha256", 00:28:30.651 "state": "completed" 00:28:30.651 }, 00:28:30.651 "cntlid": 15, 00:28:30.651 "listen_address": { 00:28:30.651 "adrfam": "IPv4", 00:28:30.651 "traddr": "10.0.0.2", 00:28:30.651 "trsvcid": "4420", 00:28:30.651 "trtype": "TCP" 00:28:30.651 }, 00:28:30.651 "peer_address": { 00:28:30.651 "adrfam": "IPv4", 00:28:30.651 "traddr": "10.0.0.1", 00:28:30.651 "trsvcid": "60460", 00:28:30.651 "trtype": "TCP" 00:28:30.651 }, 00:28:30.651 "qid": 0, 00:28:30.651 "state": "enabled", 00:28:30.651 "thread": "nvmf_tgt_poll_group_000" 00:28:30.651 } 00:28:30.651 ]' 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:30.651 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:30.907 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:30.907 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:30.907 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:30.907 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:30.907 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:31.163 13:12:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:32.096 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.661 13:12:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.919 00:28:32.919 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:32.919 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:32.919 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:33.484 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.484 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:33.484 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.484 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.484 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.484 13:12:45 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:33.484 { 00:28:33.484 "auth": { 00:28:33.484 "dhgroup": "ffdhe3072", 00:28:33.484 "digest": "sha256", 00:28:33.484 "state": "completed" 00:28:33.484 }, 00:28:33.484 "cntlid": 17, 00:28:33.485 "listen_address": { 00:28:33.485 "adrfam": "IPv4", 00:28:33.485 "traddr": "10.0.0.2", 00:28:33.485 "trsvcid": "4420", 00:28:33.485 "trtype": "TCP" 00:28:33.485 }, 00:28:33.485 "peer_address": { 00:28:33.485 "adrfam": "IPv4", 00:28:33.485 "traddr": "10.0.0.1", 00:28:33.485 "trsvcid": "46134", 00:28:33.485 "trtype": "TCP" 00:28:33.485 }, 00:28:33.485 "qid": 0, 00:28:33.485 "state": "enabled", 00:28:33.485 "thread": "nvmf_tgt_poll_group_000" 00:28:33.485 } 00:28:33.485 ]' 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.485 13:12:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:34.049 13:12:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:34.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:34.981 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe3072 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.239 13:12:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.826 00:28:35.826 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:35.826 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:35.826 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:36.084 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.084 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:36.084 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.084 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:36.342 { 00:28:36.342 "auth": { 00:28:36.342 "dhgroup": "ffdhe3072", 00:28:36.342 "digest": "sha256", 00:28:36.342 "state": "completed" 00:28:36.342 }, 00:28:36.342 "cntlid": 19, 00:28:36.342 "listen_address": { 00:28:36.342 "adrfam": "IPv4", 00:28:36.342 "traddr": "10.0.0.2", 00:28:36.342 "trsvcid": "4420", 00:28:36.342 "trtype": "TCP" 00:28:36.342 }, 00:28:36.342 "peer_address": { 00:28:36.342 "adrfam": "IPv4", 00:28:36.342 
"traddr": "10.0.0.1", 00:28:36.342 "trsvcid": "46146", 00:28:36.342 "trtype": "TCP" 00:28:36.342 }, 00:28:36.342 "qid": 0, 00:28:36.342 "state": "enabled", 00:28:36.342 "thread": "nvmf_tgt_poll_group_000" 00:28:36.342 } 00:28:36.342 ]' 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:36.342 13:12:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.907 13:12:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:37.839 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 
-- # key=key2 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.097 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.666 00:28:38.666 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:38.666 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:38.666 13:12:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:38.924 { 00:28:38.924 "auth": { 00:28:38.924 "dhgroup": "ffdhe3072", 00:28:38.924 "digest": "sha256", 00:28:38.924 "state": "completed" 00:28:38.924 }, 00:28:38.924 "cntlid": 21, 00:28:38.924 "listen_address": { 00:28:38.924 "adrfam": "IPv4", 00:28:38.924 "traddr": "10.0.0.2", 00:28:38.924 "trsvcid": "4420", 00:28:38.924 "trtype": "TCP" 00:28:38.924 }, 00:28:38.924 "peer_address": { 00:28:38.924 "adrfam": "IPv4", 00:28:38.924 "traddr": "10.0.0.1", 00:28:38.924 "trsvcid": "46178", 00:28:38.924 "trtype": "TCP" 00:28:38.924 }, 00:28:38.924 "qid": 0, 00:28:38.924 "state": "enabled", 00:28:38.924 "thread": "nvmf_tgt_poll_group_000" 00:28:38.924 } 00:28:38.924 ]' 00:28:38.924 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:39.184 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:39.184 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq 
-r '.[0].auth.dhgroup' 00:28:39.184 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:39.185 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:39.185 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:39.185 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:39.185 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:39.750 13:12:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:40.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:40.316 13:12:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.882 13:12:53 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:40.882 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:41.139 00:28:41.139 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:41.139 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:41.139 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:41.397 { 00:28:41.397 "auth": { 00:28:41.397 "dhgroup": "ffdhe3072", 00:28:41.397 "digest": "sha256", 00:28:41.397 "state": "completed" 00:28:41.397 }, 00:28:41.397 "cntlid": 23, 00:28:41.397 "listen_address": { 00:28:41.397 "adrfam": "IPv4", 00:28:41.397 "traddr": "10.0.0.2", 00:28:41.397 "trsvcid": "4420", 00:28:41.397 "trtype": "TCP" 00:28:41.397 }, 00:28:41.397 "peer_address": { 00:28:41.397 "adrfam": "IPv4", 00:28:41.397 "traddr": "10.0.0.1", 00:28:41.397 "trsvcid": "46212", 00:28:41.397 "trtype": "TCP" 00:28:41.397 }, 00:28:41.397 "qid": 0, 00:28:41.397 "state": "enabled", 00:28:41.397 "thread": "nvmf_tgt_poll_group_000" 00:28:41.397 } 00:28:41.397 ]' 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:41.397 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:41.654 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:41.654 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:41.654 13:12:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:41.912 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:42.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.478 13:12:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.738 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.738 13:12:55 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.302 00:28:43.302 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:43.302 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:43.302 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:43.561 { 00:28:43.561 "auth": { 00:28:43.561 "dhgroup": "ffdhe4096", 00:28:43.561 "digest": "sha256", 00:28:43.561 "state": "completed" 00:28:43.561 }, 00:28:43.561 "cntlid": 25, 00:28:43.561 "listen_address": { 00:28:43.561 "adrfam": "IPv4", 00:28:43.561 "traddr": "10.0.0.2", 00:28:43.561 "trsvcid": "4420", 00:28:43.561 "trtype": "TCP" 00:28:43.561 }, 00:28:43.561 "peer_address": { 00:28:43.561 "adrfam": "IPv4", 00:28:43.561 "traddr": "10.0.0.1", 00:28:43.561 "trsvcid": "41280", 00:28:43.561 "trtype": "TCP" 00:28:43.561 }, 00:28:43.561 "qid": 0, 00:28:43.561 "state": "enabled", 00:28:43.561 "thread": "nvmf_tgt_poll_group_000" 00:28:43.561 } 00:28:43.561 ]' 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:43.561 13:12:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:43.819 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:43.819 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:43.819 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:43.819 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:43.819 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:44.077 13:12:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: 
--dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:45.010 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.267 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.525 00:28:45.525 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:45.525 13:12:57 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:45.525 13:12:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:45.783 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.783 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:45.783 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.783 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:46.041 { 00:28:46.041 "auth": { 00:28:46.041 "dhgroup": "ffdhe4096", 00:28:46.041 "digest": "sha256", 00:28:46.041 "state": "completed" 00:28:46.041 }, 00:28:46.041 "cntlid": 27, 00:28:46.041 "listen_address": { 00:28:46.041 "adrfam": "IPv4", 00:28:46.041 "traddr": "10.0.0.2", 00:28:46.041 "trsvcid": "4420", 00:28:46.041 "trtype": "TCP" 00:28:46.041 }, 00:28:46.041 "peer_address": { 00:28:46.041 "adrfam": "IPv4", 00:28:46.041 "traddr": "10.0.0.1", 00:28:46.041 "trsvcid": "41322", 00:28:46.041 "trtype": "TCP" 00:28:46.041 }, 00:28:46.041 "qid": 0, 00:28:46.041 "state": "enabled", 00:28:46.041 "thread": "nvmf_tgt_poll_group_000" 00:28:46.041 } 00:28:46.041 ]' 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:46.041 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:46.299 13:12:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:47.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 
00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:47.250 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.508 13:12:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.074 00:28:48.074 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:48.074 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:48.074 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.332 13:13:00 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:48.332 { 00:28:48.332 "auth": { 00:28:48.332 "dhgroup": "ffdhe4096", 00:28:48.332 "digest": "sha256", 00:28:48.332 "state": "completed" 00:28:48.332 }, 00:28:48.332 "cntlid": 29, 00:28:48.332 "listen_address": { 00:28:48.332 "adrfam": "IPv4", 00:28:48.332 "traddr": "10.0.0.2", 00:28:48.332 "trsvcid": "4420", 00:28:48.332 "trtype": "TCP" 00:28:48.332 }, 00:28:48.332 "peer_address": { 00:28:48.332 "adrfam": "IPv4", 00:28:48.332 "traddr": "10.0.0.1", 00:28:48.332 "trsvcid": "41362", 00:28:48.332 "trtype": "TCP" 00:28:48.332 }, 00:28:48.332 "qid": 0, 00:28:48.332 "state": "enabled", 00:28:48.332 "thread": "nvmf_tgt_poll_group_000" 00:28:48.332 } 00:28:48.332 ]' 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:48.332 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:48.590 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:48.590 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:48.590 13:13:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:48.848 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:49.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:49.812 13:13:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:50.071 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:50.329 00:28:50.329 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:50.329 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:50.329 13:13:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:50.902 { 00:28:50.902 "auth": { 00:28:50.902 "dhgroup": "ffdhe4096", 00:28:50.902 "digest": "sha256", 00:28:50.902 "state": "completed" 00:28:50.902 }, 00:28:50.902 "cntlid": 31, 00:28:50.902 "listen_address": { 
00:28:50.902 "adrfam": "IPv4", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "trtype": "TCP" 00:28:50.902 }, 00:28:50.902 "peer_address": { 00:28:50.902 "adrfam": "IPv4", 00:28:50.902 "traddr": "10.0.0.1", 00:28:50.902 "trsvcid": "41380", 00:28:50.902 "trtype": "TCP" 00:28:50.902 }, 00:28:50.902 "qid": 0, 00:28:50.902 "state": "enabled", 00:28:50.902 "thread": "nvmf_tgt_poll_group_000" 00:28:50.902 } 00:28:50.902 ]' 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:50.902 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:50.903 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:50.903 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:51.161 13:13:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:28:52.092 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:52.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:52.093 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:52.350 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:28:52.350 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:52.350 
13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.351 13:13:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.917 00:28:52.917 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:52.917 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:52.917 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:53.218 { 00:28:53.218 "auth": { 00:28:53.218 "dhgroup": "ffdhe6144", 00:28:53.218 "digest": "sha256", 00:28:53.218 "state": "completed" 00:28:53.218 }, 00:28:53.218 "cntlid": 33, 00:28:53.218 "listen_address": { 00:28:53.218 "adrfam": "IPv4", 00:28:53.218 "traddr": "10.0.0.2", 00:28:53.218 "trsvcid": "4420", 00:28:53.218 "trtype": "TCP" 00:28:53.218 }, 00:28:53.218 "peer_address": { 00:28:53.218 "adrfam": "IPv4", 00:28:53.218 "traddr": "10.0.0.1", 00:28:53.218 "trsvcid": "36480", 00:28:53.218 "trtype": "TCP" 00:28:53.218 }, 00:28:53.218 "qid": 0, 00:28:53.218 "state": "enabled", 00:28:53.218 "thread": "nvmf_tgt_poll_group_000" 00:28:53.218 } 00:28:53.218 ]' 00:28:53.218 13:13:05 
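The xtrace above repeats one RPC sequence per digest/dhgroup/key combination: restrict the host-side bdev_nvme module to the digest and DH group under test, register the DH-HMAC-CHAP key for the host NQN on the target, attach a controller over TCP with that key, and then read the qpairs back to confirm what was negotiated. A condensed sketch of that sequence, in the same bash register as target/auth.sh; as in the trace, hostrpc stands for rpc.py -s /var/tmp/host.sock, rpc_cmd for rpc.py against the target's default socket, and $hostnqn for the uuid-based host NQN shown in the log:

    # host side: only offer the digest/dhgroup pair being tested
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # target side: bind key0 (and its controller key ckey0) to the host NQN
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: the attach completes only if DH-HMAC-CHAP authentication succeeds
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # read back the qpairs for the jq checks that follow in the trace
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0

key0/ckey0 name keys that were loaded earlier in the test run and do not appear in this excerpt.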
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:53.218 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:53.476 13:13:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:28:54.410 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:54.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:54.410 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:54.410 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.410 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.410 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.411 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:54.411 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:54.411 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:54.672 13:13:06 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.672 13:13:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.251 00:28:55.251 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:55.251 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:55.251 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:55.517 { 00:28:55.517 "auth": { 00:28:55.517 "dhgroup": "ffdhe6144", 00:28:55.517 "digest": "sha256", 00:28:55.517 "state": "completed" 00:28:55.517 }, 00:28:55.517 "cntlid": 35, 00:28:55.517 "listen_address": { 00:28:55.517 "adrfam": "IPv4", 00:28:55.517 "traddr": "10.0.0.2", 00:28:55.517 "trsvcid": "4420", 00:28:55.517 "trtype": "TCP" 00:28:55.517 }, 00:28:55.517 "peer_address": { 00:28:55.517 "adrfam": "IPv4", 00:28:55.517 "traddr": "10.0.0.1", 00:28:55.517 "trsvcid": "36502", 00:28:55.517 "trtype": "TCP" 00:28:55.517 }, 00:28:55.517 "qid": 0, 00:28:55.517 "state": "enabled", 00:28:55.517 "thread": "nvmf_tgt_poll_group_000" 00:28:55.517 } 00:28:55.517 ]' 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:55.517 13:13:07 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:55.517 13:13:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:55.784 13:13:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:56.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:56.747 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.006 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.570 00:28:57.570 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:57.570 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:57.570 13:13:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:57.826 { 00:28:57.826 "auth": { 00:28:57.826 "dhgroup": "ffdhe6144", 00:28:57.826 "digest": "sha256", 00:28:57.826 "state": "completed" 00:28:57.826 }, 00:28:57.826 "cntlid": 37, 00:28:57.826 "listen_address": { 00:28:57.826 "adrfam": "IPv4", 00:28:57.826 "traddr": "10.0.0.2", 00:28:57.826 "trsvcid": "4420", 00:28:57.826 "trtype": "TCP" 00:28:57.826 }, 00:28:57.826 "peer_address": { 00:28:57.826 "adrfam": "IPv4", 00:28:57.826 "traddr": "10.0.0.1", 00:28:57.826 "trsvcid": "36526", 00:28:57.826 "trtype": "TCP" 00:28:57.826 }, 00:28:57.826 "qid": 0, 00:28:57.826 "state": "enabled", 00:28:57.826 "thread": "nvmf_tgt_poll_group_000" 00:28:57.826 } 00:28:57.826 ]' 00:28:57.826 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:58.084 13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:58.341 
13:13:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:59.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:59.274 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:59.532 13:13:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:00.097 00:29:00.097 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:00.097 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:00.097 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:00.355 { 00:29:00.355 "auth": { 00:29:00.355 "dhgroup": "ffdhe6144", 00:29:00.355 "digest": "sha256", 00:29:00.355 "state": "completed" 00:29:00.355 }, 00:29:00.355 "cntlid": 39, 00:29:00.355 "listen_address": { 00:29:00.355 "adrfam": "IPv4", 00:29:00.355 "traddr": "10.0.0.2", 00:29:00.355 "trsvcid": "4420", 00:29:00.355 "trtype": "TCP" 00:29:00.355 }, 00:29:00.355 "peer_address": { 00:29:00.355 "adrfam": "IPv4", 00:29:00.355 "traddr": "10.0.0.1", 00:29:00.355 "trsvcid": "36566", 00:29:00.355 "trtype": "TCP" 00:29:00.355 }, 00:29:00.355 "qid": 0, 00:29:00.355 "state": "enabled", 00:29:00.355 "thread": "nvmf_tgt_poll_group_000" 00:29:00.355 } 00:29:00.355 ]' 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.355 13:13:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:00.613 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:01.626 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:01.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
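After the host-stack attach/detach check, each iteration drives the same authentication through the kernel initiator with nvme-cli, passing the DHHC-1 secret on the command line, and then removes the host entry so the next digest/dhgroup/key combination starts clean (the nested for digest / for dhgroup / for keyid loops are visible in the trace). A sketch of that step with the secrets abbreviated; the full DHHC-1:03:... values appear in the log, and $hostnqn/$hostid stand for the uuid-based host NQN and host ID used throughout:

    # kernel initiator: authenticate with the host secret; the key0-key2 iterations in
    # this log also pass --dhchap-ctrl-secret for bidirectional auth, key3 does not
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" --dhchap-secret "DHHC-1:03:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target side: drop the host mapping before the next combination
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"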
00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:01.627 13:13:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.884 13:13:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:02.857 00:29:02.857 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:02.857 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:02.857 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:03.422 { 00:29:03.422 "auth": { 00:29:03.422 "dhgroup": "ffdhe8192", 00:29:03.422 "digest": "sha256", 00:29:03.422 "state": "completed" 00:29:03.422 }, 00:29:03.422 "cntlid": 41, 00:29:03.422 "listen_address": { 00:29:03.422 "adrfam": "IPv4", 00:29:03.422 "traddr": "10.0.0.2", 00:29:03.422 "trsvcid": "4420", 00:29:03.422 "trtype": "TCP" 00:29:03.422 }, 00:29:03.422 "peer_address": { 00:29:03.422 "adrfam": "IPv4", 00:29:03.422 "traddr": "10.0.0.1", 00:29:03.422 "trsvcid": "41470", 00:29:03.422 "trtype": "TCP" 00:29:03.422 }, 00:29:03.422 "qid": 0, 00:29:03.422 "state": "enabled", 00:29:03.422 "thread": "nvmf_tgt_poll_group_000" 00:29:03.422 } 00:29:03.422 ]' 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:03.422 13:13:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:03.987 13:13:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:04.552 13:13:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:04.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:04.552 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:04.552 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.552 13:13:17 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.809 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.809 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:04.809 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:04.809 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.067 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.631 00:29:05.631 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:05.631 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:05.631 13:13:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.900 13:13:18 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:05.900 { 00:29:05.900 "auth": { 00:29:05.900 "dhgroup": "ffdhe8192", 00:29:05.900 "digest": "sha256", 00:29:05.900 "state": "completed" 00:29:05.900 }, 00:29:05.900 "cntlid": 43, 00:29:05.900 "listen_address": { 00:29:05.900 "adrfam": "IPv4", 00:29:05.900 "traddr": "10.0.0.2", 00:29:05.900 "trsvcid": "4420", 00:29:05.900 "trtype": "TCP" 00:29:05.900 }, 00:29:05.900 "peer_address": { 00:29:05.900 "adrfam": "IPv4", 00:29:05.900 "traddr": "10.0.0.1", 00:29:05.900 "trsvcid": "41500", 00:29:05.900 "trtype": "TCP" 00:29:05.900 }, 00:29:05.900 "qid": 0, 00:29:05.900 "state": "enabled", 00:29:05.900 "thread": "nvmf_tgt_poll_group_000" 00:29:05.900 } 00:29:05.900 ]' 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:05.900 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:06.157 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:06.157 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:06.157 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:06.157 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:06.157 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:06.415 13:13:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:07.350 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:07.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:07.351 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:07.351 13:13:19 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.608 13:13:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.172 00:29:08.172 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:08.172 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:08.172 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:08.430 { 00:29:08.430 "auth": { 00:29:08.430 "dhgroup": "ffdhe8192", 00:29:08.430 "digest": "sha256", 00:29:08.430 "state": "completed" 00:29:08.430 }, 00:29:08.430 "cntlid": 45, 00:29:08.430 "listen_address": { 00:29:08.430 
"adrfam": "IPv4", 00:29:08.430 "traddr": "10.0.0.2", 00:29:08.430 "trsvcid": "4420", 00:29:08.430 "trtype": "TCP" 00:29:08.430 }, 00:29:08.430 "peer_address": { 00:29:08.430 "adrfam": "IPv4", 00:29:08.430 "traddr": "10.0.0.1", 00:29:08.430 "trsvcid": "41516", 00:29:08.430 "trtype": "TCP" 00:29:08.430 }, 00:29:08.430 "qid": 0, 00:29:08.430 "state": "enabled", 00:29:08.430 "thread": "nvmf_tgt_poll_group_000" 00:29:08.430 } 00:29:08.430 ]' 00:29:08.430 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:08.688 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:08.688 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:08.688 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:08.688 13:13:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:08.688 13:13:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:08.688 13:13:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:08.688 13:13:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:08.946 13:13:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:09.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:09.878 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:10.136 13:13:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:11.068 00:29:11.068 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:11.068 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:11.068 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:11.326 { 00:29:11.326 "auth": { 00:29:11.326 "dhgroup": "ffdhe8192", 00:29:11.326 "digest": "sha256", 00:29:11.326 "state": "completed" 00:29:11.326 }, 00:29:11.326 "cntlid": 47, 00:29:11.326 "listen_address": { 00:29:11.326 "adrfam": "IPv4", 00:29:11.326 "traddr": "10.0.0.2", 00:29:11.326 "trsvcid": "4420", 00:29:11.326 "trtype": "TCP" 00:29:11.326 }, 00:29:11.326 "peer_address": { 00:29:11.326 "adrfam": "IPv4", 00:29:11.326 "traddr": "10.0.0.1", 00:29:11.326 "trsvcid": "41540", 00:29:11.326 "trtype": "TCP" 00:29:11.326 }, 00:29:11.326 "qid": 0, 00:29:11.326 "state": "enabled", 00:29:11.326 "thread": "nvmf_tgt_poll_group_000" 00:29:11.326 } 00:29:11.326 ]' 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:29:11.326 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:11.583 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:11.583 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:11.583 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:11.583 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:11.583 13:13:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:11.841 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:12.406 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:12.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:12.406 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:12.406 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.406 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:12.663 13:13:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:12.920 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.177 00:29:13.177 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:13.177 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:13.177 13:13:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:13.742 { 00:29:13.742 "auth": { 00:29:13.742 "dhgroup": "null", 00:29:13.742 "digest": "sha384", 00:29:13.742 "state": "completed" 00:29:13.742 }, 00:29:13.742 "cntlid": 49, 00:29:13.742 "listen_address": { 00:29:13.742 "adrfam": "IPv4", 00:29:13.742 "traddr": "10.0.0.2", 00:29:13.742 "trsvcid": "4420", 00:29:13.742 "trtype": "TCP" 00:29:13.742 }, 00:29:13.742 "peer_address": { 00:29:13.742 "adrfam": "IPv4", 00:29:13.742 "traddr": "10.0.0.1", 00:29:13.742 "trsvcid": "51752", 00:29:13.742 "trtype": "TCP" 00:29:13.742 }, 00:29:13.742 "qid": 0, 00:29:13.742 "state": "enabled", 00:29:13.742 "thread": "nvmf_tgt_poll_group_000" 00:29:13.742 } 00:29:13.742 ]' 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:13.742 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:13.999 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:13.999 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
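The nvmf_subsystem_get_qpairs output captured above is what each iteration asserts on: the auth block of the first listed qpair (qid 0 in this trace) must report the digest and DH group that were configured, and its state must have reached completed. A minimal sketch of that check, assuming the JSON array has been saved in $qpairs exactly as in the trace (here for the sha384/null combination):

    # negotiated parameters must match what bdev_nvme_set_options offered
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    # "completed" means the DH-HMAC-CHAP exchange finished on this qpair
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]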
00:29:13.999 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:13.999 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:13.999 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:14.263 13:13:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:14.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:14.835 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.093 13:13:27 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.093 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:15.659 00:29:15.659 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:15.659 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:15.659 13:13:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:15.659 { 00:29:15.659 "auth": { 00:29:15.659 "dhgroup": "null", 00:29:15.659 "digest": "sha384", 00:29:15.659 "state": "completed" 00:29:15.659 }, 00:29:15.659 "cntlid": 51, 00:29:15.659 "listen_address": { 00:29:15.659 "adrfam": "IPv4", 00:29:15.659 "traddr": "10.0.0.2", 00:29:15.659 "trsvcid": "4420", 00:29:15.659 "trtype": "TCP" 00:29:15.659 }, 00:29:15.659 "peer_address": { 00:29:15.659 "adrfam": "IPv4", 00:29:15.659 "traddr": "10.0.0.1", 00:29:15.659 "trsvcid": "51790", 00:29:15.659 "trtype": "TCP" 00:29:15.659 }, 00:29:15.659 "qid": 0, 00:29:15.659 "state": "enabled", 00:29:15.659 "thread": "nvmf_tgt_poll_group_000" 00:29:15.659 } 00:29:15.659 ]' 00:29:15.659 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:15.917 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:16.175 13:13:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:16.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:16.739 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:16.996 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.254 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.512 00:29:17.512 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:17.512 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:17.512 13:13:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:17.770 { 00:29:17.770 "auth": { 00:29:17.770 "dhgroup": "null", 00:29:17.770 "digest": "sha384", 00:29:17.770 "state": "completed" 00:29:17.770 }, 00:29:17.770 "cntlid": 53, 00:29:17.770 "listen_address": { 00:29:17.770 "adrfam": "IPv4", 00:29:17.770 "traddr": "10.0.0.2", 00:29:17.770 "trsvcid": "4420", 00:29:17.770 "trtype": "TCP" 00:29:17.770 }, 00:29:17.770 "peer_address": { 00:29:17.770 "adrfam": "IPv4", 00:29:17.770 "traddr": "10.0.0.1", 00:29:17.770 "trsvcid": "51816", 00:29:17.770 "trtype": "TCP" 00:29:17.770 }, 00:29:17.770 "qid": 0, 00:29:17.770 "state": "enabled", 00:29:17.770 "thread": "nvmf_tgt_poll_group_000" 00:29:17.770 } 00:29:17.770 ]' 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:17.770 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:18.027 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:18.028 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:18.028 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:18.028 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:18.028 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:18.285 13:13:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:19.219 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:19.219 13:13:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:19.783 00:29:19.783 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:19.783 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:19.783 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:20.040 { 00:29:20.040 "auth": { 00:29:20.040 "dhgroup": "null", 00:29:20.040 "digest": "sha384", 00:29:20.040 "state": "completed" 00:29:20.040 }, 00:29:20.040 "cntlid": 55, 00:29:20.040 "listen_address": { 00:29:20.040 "adrfam": "IPv4", 00:29:20.040 "traddr": "10.0.0.2", 00:29:20.040 "trsvcid": "4420", 00:29:20.040 "trtype": "TCP" 00:29:20.040 }, 00:29:20.040 "peer_address": { 00:29:20.040 "adrfam": "IPv4", 00:29:20.040 "traddr": "10.0.0.1", 00:29:20.040 "trsvcid": "51840", 00:29:20.040 "trtype": "TCP" 00:29:20.040 }, 00:29:20.040 "qid": 0, 00:29:20.040 "state": "enabled", 00:29:20.040 "thread": "nvmf_tgt_poll_group_000" 00:29:20.040 } 00:29:20.040 ]' 00:29:20.040 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:20.297 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:20.555 13:13:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:21.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in 
"${dhgroups[@]}" 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:21.487 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.746 13:13:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:22.004 00:29:22.004 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:22.004 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:22.004 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.261 
13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:22.261 { 00:29:22.261 "auth": { 00:29:22.261 "dhgroup": "ffdhe2048", 00:29:22.261 "digest": "sha384", 00:29:22.261 "state": "completed" 00:29:22.261 }, 00:29:22.261 "cntlid": 57, 00:29:22.261 "listen_address": { 00:29:22.261 "adrfam": "IPv4", 00:29:22.261 "traddr": "10.0.0.2", 00:29:22.261 "trsvcid": "4420", 00:29:22.261 "trtype": "TCP" 00:29:22.261 }, 00:29:22.261 "peer_address": { 00:29:22.261 "adrfam": "IPv4", 00:29:22.261 "traddr": "10.0.0.1", 00:29:22.261 "trsvcid": "51880", 00:29:22.261 "trtype": "TCP" 00:29:22.261 }, 00:29:22.261 "qid": 0, 00:29:22.261 "state": "enabled", 00:29:22.261 "thread": "nvmf_tgt_poll_group_000" 00:29:22.261 } 00:29:22.261 ]' 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:22.261 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:22.518 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:22.518 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:22.518 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:22.518 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:22.518 13:13:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:22.775 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:23.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:23.340 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.598 13:13:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.598 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.598 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.598 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.856 00:29:24.113 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:24.114 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:24.114 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.371 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:24.371 { 00:29:24.371 "auth": { 00:29:24.371 "dhgroup": "ffdhe2048", 00:29:24.371 "digest": "sha384", 00:29:24.372 "state": "completed" 00:29:24.372 }, 00:29:24.372 "cntlid": 59, 00:29:24.372 "listen_address": { 00:29:24.372 "adrfam": "IPv4", 00:29:24.372 "traddr": "10.0.0.2", 00:29:24.372 "trsvcid": "4420", 00:29:24.372 "trtype": "TCP" 00:29:24.372 }, 00:29:24.372 "peer_address": { 00:29:24.372 "adrfam": 
"IPv4", 00:29:24.372 "traddr": "10.0.0.1", 00:29:24.372 "trsvcid": "45590", 00:29:24.372 "trtype": "TCP" 00:29:24.372 }, 00:29:24.372 "qid": 0, 00:29:24.372 "state": "enabled", 00:29:24.372 "thread": "nvmf_tgt_poll_group_000" 00:29:24.372 } 00:29:24.372 ]' 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:24.372 13:13:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:24.630 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:25.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:25.563 13:13:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key2 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.821 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.079 00:29:26.079 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:26.079 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:26.079 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:26.644 { 00:29:26.644 "auth": { 00:29:26.644 "dhgroup": "ffdhe2048", 00:29:26.644 "digest": "sha384", 00:29:26.644 "state": "completed" 00:29:26.644 }, 00:29:26.644 "cntlid": 61, 00:29:26.644 "listen_address": { 00:29:26.644 "adrfam": "IPv4", 00:29:26.644 "traddr": "10.0.0.2", 00:29:26.644 "trsvcid": "4420", 00:29:26.644 "trtype": "TCP" 00:29:26.644 }, 00:29:26.644 "peer_address": { 00:29:26.644 "adrfam": "IPv4", 00:29:26.644 "traddr": "10.0.0.1", 00:29:26.644 "trsvcid": "45618", 00:29:26.644 "trtype": "TCP" 00:29:26.644 }, 00:29:26.644 "qid": 0, 00:29:26.644 "state": "enabled", 00:29:26.644 "thread": "nvmf_tgt_poll_group_000" 00:29:26.644 } 00:29:26.644 ]' 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:26.644 13:13:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:26.902 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:27.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:27.835 13:13:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:27.835 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:29:27.835 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:27.835 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:27.835 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.836 13:13:40 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:27.836 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:28.401 00:29:28.401 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:28.401 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:28.401 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:28.659 { 00:29:28.659 "auth": { 00:29:28.659 "dhgroup": "ffdhe2048", 00:29:28.659 "digest": "sha384", 00:29:28.659 "state": "completed" 00:29:28.659 }, 00:29:28.659 "cntlid": 63, 00:29:28.659 "listen_address": { 00:29:28.659 "adrfam": "IPv4", 00:29:28.659 "traddr": "10.0.0.2", 00:29:28.659 "trsvcid": "4420", 00:29:28.659 "trtype": "TCP" 00:29:28.659 }, 00:29:28.659 "peer_address": { 00:29:28.659 "adrfam": "IPv4", 00:29:28.659 "traddr": "10.0.0.1", 00:29:28.659 "trsvcid": "45642", 00:29:28.659 "trtype": "TCP" 00:29:28.659 }, 00:29:28.659 "qid": 0, 00:29:28.659 "state": "enabled", 00:29:28.659 "thread": "nvmf_tgt_poll_group_000" 00:29:28.659 } 00:29:28.659 ]' 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:28.659 13:13:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:28.659 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:28.659 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:28.659 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:28.659 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:28.659 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:28.917 13:13:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:29.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.847 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.103 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.103 13:13:42 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:30.360 00:29:30.360 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:30.360 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:30.360 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:30.617 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.617 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:30.617 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.617 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.617 13:13:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.617 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:30.617 { 00:29:30.617 "auth": { 00:29:30.617 "dhgroup": "ffdhe3072", 00:29:30.617 "digest": "sha384", 00:29:30.617 "state": "completed" 00:29:30.617 }, 00:29:30.617 "cntlid": 65, 00:29:30.617 "listen_address": { 00:29:30.617 "adrfam": "IPv4", 00:29:30.617 "traddr": "10.0.0.2", 00:29:30.617 "trsvcid": "4420", 00:29:30.617 "trtype": "TCP" 00:29:30.617 }, 00:29:30.617 "peer_address": { 00:29:30.617 "adrfam": "IPv4", 00:29:30.617 "traddr": "10.0.0.1", 00:29:30.617 "trsvcid": "45686", 00:29:30.617 "trtype": "TCP" 00:29:30.617 }, 00:29:30.617 "qid": 0, 00:29:30.617 "state": "enabled", 00:29:30.617 "thread": "nvmf_tgt_poll_group_000" 00:29:30.617 } 00:29:30.617 ]' 00:29:30.617 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:30.617 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:30.617 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:30.880 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:30.880 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:30.880 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:30.880 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:30.880 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:31.136 13:13:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: 
--dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:32.078 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:32.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.079 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.337 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.338 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.598 00:29:32.598 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:32.598 13:13:44 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:32.598 13:13:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:32.860 { 00:29:32.860 "auth": { 00:29:32.860 "dhgroup": "ffdhe3072", 00:29:32.860 "digest": "sha384", 00:29:32.860 "state": "completed" 00:29:32.860 }, 00:29:32.860 "cntlid": 67, 00:29:32.860 "listen_address": { 00:29:32.860 "adrfam": "IPv4", 00:29:32.860 "traddr": "10.0.0.2", 00:29:32.860 "trsvcid": "4420", 00:29:32.860 "trtype": "TCP" 00:29:32.860 }, 00:29:32.860 "peer_address": { 00:29:32.860 "adrfam": "IPv4", 00:29:32.860 "traddr": "10.0.0.1", 00:29:32.860 "trsvcid": "33178", 00:29:32.860 "trtype": "TCP" 00:29:32.860 }, 00:29:32.860 "qid": 0, 00:29:32.860 "state": "enabled", 00:29:32.860 "thread": "nvmf_tgt_poll_group_000" 00:29:32.860 } 00:29:32.860 ]' 00:29:32.860 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:33.123 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:33.388 13:13:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:34.357 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:34.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 
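In between the RPC-driven checks, each key is also exercised through the kernel initiator: nvme-cli connects in-band with the generated DH-HMAC-CHAP secrets and then disconnects, as in the entries just above. A hedged sketch of that leg; the flags and the DHHC-1 strings below are the key0 pair exactly as printed earlier in this log (they are test-generated values, not real credentials), and anything beyond that is illustrative:

# Sketch of the nvme-cli leg recorded in this run.
SUBNQN='nqn.2024-03.io.spdk:cnode0'
HOSTNQN='nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a'
HOST_KEY='DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==:'  # key0 host secret from this run
CTRL_KEY='DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=:'  # key0 controller secret from this run

# Connect with a single I/O queue (-i 1), passing both secrets for mutual authentication.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

# A successful mutual authentication is followed by a clean teardown of the subsystem connection.
nvme disconnect -n "$SUBNQN"

The "disconnected 1 controller(s)" lines in the log are the output of that final disconnect; the test then removes the host from the subsystem and moves on to the next key index under the same digest/DH-group combination.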
00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:34.358 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.620 13:13:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.878 00:29:34.878 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:34.878 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:34.878 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:35.136 13:13:47 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:35.136 { 00:29:35.136 "auth": { 00:29:35.136 "dhgroup": "ffdhe3072", 00:29:35.136 "digest": "sha384", 00:29:35.136 "state": "completed" 00:29:35.136 }, 00:29:35.136 "cntlid": 69, 00:29:35.136 "listen_address": { 00:29:35.136 "adrfam": "IPv4", 00:29:35.136 "traddr": "10.0.0.2", 00:29:35.136 "trsvcid": "4420", 00:29:35.136 "trtype": "TCP" 00:29:35.136 }, 00:29:35.136 "peer_address": { 00:29:35.136 "adrfam": "IPv4", 00:29:35.136 "traddr": "10.0.0.1", 00:29:35.136 "trsvcid": "33206", 00:29:35.136 "trtype": "TCP" 00:29:35.136 }, 00:29:35.136 "qid": 0, 00:29:35.136 "state": "enabled", 00:29:35.136 "thread": "nvmf_tgt_poll_group_000" 00:29:35.136 } 00:29:35.136 ]' 00:29:35.136 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:35.395 13:13:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.653 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:36.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:36.587 13:13:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:36.845 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:37.103 00:29:37.103 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:37.103 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:37.103 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:37.361 { 00:29:37.361 "auth": { 00:29:37.361 "dhgroup": "ffdhe3072", 00:29:37.361 "digest": "sha384", 00:29:37.361 "state": "completed" 00:29:37.361 }, 00:29:37.361 "cntlid": 71, 00:29:37.361 "listen_address": { 
00:29:37.361 "adrfam": "IPv4", 00:29:37.361 "traddr": "10.0.0.2", 00:29:37.361 "trsvcid": "4420", 00:29:37.361 "trtype": "TCP" 00:29:37.361 }, 00:29:37.361 "peer_address": { 00:29:37.361 "adrfam": "IPv4", 00:29:37.361 "traddr": "10.0.0.1", 00:29:37.361 "trsvcid": "33238", 00:29:37.361 "trtype": "TCP" 00:29:37.361 }, 00:29:37.361 "qid": 0, 00:29:37.361 "state": "enabled", 00:29:37.361 "thread": "nvmf_tgt_poll_group_000" 00:29:37.361 } 00:29:37.361 ]' 00:29:37.361 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.619 13:13:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.878 13:13:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:38.829 13:13:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:38.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:38.829 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:39.086 
13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.086 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.087 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.344 00:29:39.344 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:39.344 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:39.344 13:13:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.602 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:39.602 { 00:29:39.602 "auth": { 00:29:39.602 "dhgroup": "ffdhe4096", 00:29:39.602 "digest": "sha384", 00:29:39.602 "state": "completed" 00:29:39.602 }, 00:29:39.602 "cntlid": 73, 00:29:39.602 "listen_address": { 00:29:39.602 "adrfam": "IPv4", 00:29:39.602 "traddr": "10.0.0.2", 00:29:39.602 "trsvcid": "4420", 00:29:39.602 "trtype": "TCP" 00:29:39.602 }, 00:29:39.602 "peer_address": { 00:29:39.602 "adrfam": "IPv4", 00:29:39.602 "traddr": "10.0.0.1", 00:29:39.602 "trsvcid": "33262", 00:29:39.602 "trtype": "TCP" 00:29:39.602 }, 00:29:39.602 "qid": 0, 00:29:39.602 "state": "enabled", 00:29:39.602 "thread": "nvmf_tgt_poll_group_000" 00:29:39.602 } 00:29:39.602 ]' 00:29:39.602 13:13:52 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:39.859 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:40.424 13:13:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:40.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:40.990 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:41.554 13:13:53 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.554 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.555 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.555 13:13:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.833 00:29:41.833 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:41.833 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:41.833 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.138 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:42.138 { 00:29:42.138 "auth": { 00:29:42.138 "dhgroup": "ffdhe4096", 00:29:42.138 "digest": "sha384", 00:29:42.138 "state": "completed" 00:29:42.138 }, 00:29:42.138 "cntlid": 75, 00:29:42.138 "listen_address": { 00:29:42.138 "adrfam": "IPv4", 00:29:42.138 "traddr": "10.0.0.2", 00:29:42.138 "trsvcid": "4420", 00:29:42.138 "trtype": "TCP" 00:29:42.138 }, 00:29:42.139 "peer_address": { 00:29:42.139 "adrfam": "IPv4", 00:29:42.139 "traddr": "10.0.0.1", 00:29:42.139 "trsvcid": "33300", 00:29:42.139 "trtype": "TCP" 00:29:42.139 }, 00:29:42.139 "qid": 0, 00:29:42.139 "state": "enabled", 00:29:42.139 "thread": "nvmf_tgt_poll_group_000" 00:29:42.139 } 00:29:42.139 ]' 00:29:42.139 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:42.139 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:42.139 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:42.139 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:42.139 13:13:54 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:42.397 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:42.397 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:42.397 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:42.654 13:13:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:43.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:43.219 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.476 13:13:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.044 00:29:44.044 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:44.044 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:44.044 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.302 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:44.302 { 00:29:44.302 "auth": { 00:29:44.302 "dhgroup": "ffdhe4096", 00:29:44.302 "digest": "sha384", 00:29:44.302 "state": "completed" 00:29:44.302 }, 00:29:44.302 "cntlid": 77, 00:29:44.302 "listen_address": { 00:29:44.302 "adrfam": "IPv4", 00:29:44.302 "traddr": "10.0.0.2", 00:29:44.302 "trsvcid": "4420", 00:29:44.302 "trtype": "TCP" 00:29:44.302 }, 00:29:44.302 "peer_address": { 00:29:44.302 "adrfam": "IPv4", 00:29:44.302 "traddr": "10.0.0.1", 00:29:44.303 "trsvcid": "46538", 00:29:44.303 "trtype": "TCP" 00:29:44.303 }, 00:29:44.303 "qid": 0, 00:29:44.303 "state": "enabled", 00:29:44.303 "thread": "nvmf_tgt_poll_group_000" 00:29:44.303 } 00:29:44.303 ]' 00:29:44.303 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:44.559 13:13:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.816 
13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:45.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.748 13:13:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.748 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:29:45.748 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:45.748 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:45.749 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:46.314 00:29:46.314 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:46.314 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:46.314 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.571 13:13:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:46.571 { 00:29:46.571 "auth": { 00:29:46.571 "dhgroup": "ffdhe4096", 00:29:46.571 "digest": "sha384", 00:29:46.571 "state": "completed" 00:29:46.571 }, 00:29:46.571 "cntlid": 79, 00:29:46.571 "listen_address": { 00:29:46.571 "adrfam": "IPv4", 00:29:46.571 "traddr": "10.0.0.2", 00:29:46.571 "trsvcid": "4420", 00:29:46.571 "trtype": "TCP" 00:29:46.571 }, 00:29:46.571 "peer_address": { 00:29:46.571 "adrfam": "IPv4", 00:29:46.571 "traddr": "10.0.0.1", 00:29:46.571 "trsvcid": "46556", 00:29:46.571 "trtype": "TCP" 00:29:46.571 }, 00:29:46.571 "qid": 0, 00:29:46.571 "state": "enabled", 00:29:46.571 "thread": "nvmf_tgt_poll_group_000" 00:29:46.571 } 00:29:46.571 ]' 00:29:46.571 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:46.829 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:47.087 13:13:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:48.018 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:48.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
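The nvme-cli leg of the round that just completed above reduces to the connect/disconnect pair sketched below; the address, NQNs and host ID are taken from the trace, and the shortened DHHC-1 placeholder stands in for the full secret string printed in the log (rounds that also authenticate the controller add --dhchap-ctrl-secret in the same way).

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
key='DHHC-1:03:...'   # full secret as printed in the trace; shortened here

# connect through the kernel initiator, authenticating with DH-HMAC-CHAP
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key"

nvme disconnect -n "$subnqn"   # expected output: "disconnected 1 controller(s)"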
00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:48.019 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.335 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.605 00:29:48.605 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:48.605 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:48.605 13:14:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.862 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:48.862 { 00:29:48.862 "auth": { 00:29:48.862 "dhgroup": "ffdhe6144", 00:29:48.862 "digest": "sha384", 00:29:48.862 "state": "completed" 00:29:48.862 }, 00:29:48.862 "cntlid": 81, 00:29:48.862 "listen_address": { 00:29:48.862 "adrfam": "IPv4", 00:29:48.862 "traddr": "10.0.0.2", 00:29:48.862 "trsvcid": "4420", 00:29:48.862 "trtype": "TCP" 00:29:48.862 }, 00:29:48.862 "peer_address": { 00:29:48.863 "adrfam": "IPv4", 00:29:48.863 "traddr": "10.0.0.1", 00:29:48.863 "trsvcid": "46592", 00:29:48.863 "trtype": "TCP" 00:29:48.863 }, 00:29:48.863 "qid": 0, 00:29:48.863 "state": "enabled", 00:29:48.863 "thread": "nvmf_tgt_poll_group_000" 00:29:48.863 } 00:29:48.863 ]' 00:29:48.863 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:48.863 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:48.863 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:49.120 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:49.120 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:49.120 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:49.120 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:49.120 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:49.377 13:14:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:50.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.311 13:14:02 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.311 13:14:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.878 00:29:50.878 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:50.878 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:50.878 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.135 13:14:03 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:51.135 { 00:29:51.135 "auth": { 00:29:51.135 "dhgroup": "ffdhe6144", 00:29:51.135 "digest": "sha384", 00:29:51.135 "state": "completed" 00:29:51.135 }, 00:29:51.135 "cntlid": 83, 00:29:51.135 "listen_address": { 00:29:51.135 "adrfam": "IPv4", 00:29:51.135 "traddr": "10.0.0.2", 00:29:51.135 "trsvcid": "4420", 00:29:51.135 "trtype": "TCP" 00:29:51.135 }, 00:29:51.135 "peer_address": { 00:29:51.135 "adrfam": "IPv4", 00:29:51.135 "traddr": "10.0.0.1", 00:29:51.135 "trsvcid": "46628", 00:29:51.135 "trtype": "TCP" 00:29:51.135 }, 00:29:51.135 "qid": 0, 00:29:51.135 "state": "enabled", 00:29:51.135 "thread": "nvmf_tgt_poll_group_000" 00:29:51.135 } 00:29:51.135 ]' 00:29:51.135 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:51.393 13:14:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:51.651 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:52.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:52.583 13:14:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:52.583 13:14:04 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:52.841 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:53.407 00:29:53.407 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:53.407 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:53.407 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:53.665 13:14:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:53.665 { 00:29:53.665 "auth": { 00:29:53.665 "dhgroup": "ffdhe6144", 00:29:53.665 "digest": "sha384", 00:29:53.665 "state": "completed" 00:29:53.665 }, 00:29:53.665 "cntlid": 85, 00:29:53.665 "listen_address": { 00:29:53.665 
"adrfam": "IPv4", 00:29:53.665 "traddr": "10.0.0.2", 00:29:53.665 "trsvcid": "4420", 00:29:53.665 "trtype": "TCP" 00:29:53.665 }, 00:29:53.665 "peer_address": { 00:29:53.665 "adrfam": "IPv4", 00:29:53.665 "traddr": "10.0.0.1", 00:29:53.665 "trsvcid": "48958", 00:29:53.665 "trtype": "TCP" 00:29:53.665 }, 00:29:53.665 "qid": 0, 00:29:53.665 "state": "enabled", 00:29:53.665 "thread": "nvmf_tgt_poll_group_000" 00:29:53.665 } 00:29:53.665 ]' 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:53.665 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:53.924 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:53.924 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:53.924 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:54.182 13:14:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:54.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:54.749 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:55.315 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:29:55.315 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:55.315 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:29:55.315 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:55.315 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:55.316 13:14:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:55.573 00:29:55.573 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:55.573 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:55.573 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:56.162 { 00:29:56.162 "auth": { 00:29:56.162 "dhgroup": "ffdhe6144", 00:29:56.162 "digest": "sha384", 00:29:56.162 "state": "completed" 00:29:56.162 }, 00:29:56.162 "cntlid": 87, 00:29:56.162 "listen_address": { 00:29:56.162 "adrfam": "IPv4", 00:29:56.162 "traddr": "10.0.0.2", 00:29:56.162 "trsvcid": "4420", 00:29:56.162 "trtype": "TCP" 00:29:56.162 }, 00:29:56.162 "peer_address": { 00:29:56.162 "adrfam": "IPv4", 00:29:56.162 "traddr": "10.0.0.1", 00:29:56.162 "trsvcid": "48992", 00:29:56.162 "trtype": "TCP" 00:29:56.162 }, 00:29:56.162 "qid": 0, 00:29:56.162 "state": "enabled", 00:29:56.162 "thread": "nvmf_tgt_poll_group_000" 00:29:56.162 } 00:29:56.162 ]' 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:56.162 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:56.420 13:14:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:57.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:57.352 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:57.609 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:29:57.609 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:57.609 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.610 13:14:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:58.176 00:29:58.434 13:14:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:58.434 13:14:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:58.434 13:14:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:58.692 { 00:29:58.692 "auth": { 00:29:58.692 "dhgroup": "ffdhe8192", 00:29:58.692 "digest": "sha384", 00:29:58.692 "state": "completed" 00:29:58.692 }, 00:29:58.692 "cntlid": 89, 00:29:58.692 "listen_address": { 00:29:58.692 "adrfam": "IPv4", 00:29:58.692 "traddr": "10.0.0.2", 00:29:58.692 "trsvcid": "4420", 00:29:58.692 "trtype": "TCP" 00:29:58.692 }, 00:29:58.692 "peer_address": { 00:29:58.692 "adrfam": "IPv4", 00:29:58.692 "traddr": "10.0.0.1", 00:29:58.692 "trsvcid": "49008", 00:29:58.692 "trtype": "TCP" 00:29:58.692 }, 00:29:58.692 "qid": 0, 00:29:58.692 "state": "enabled", 00:29:58.692 "thread": "nvmf_tgt_poll_group_000" 00:29:58.692 } 00:29:58.692 ]' 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:58.692 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:58.949 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:58.949 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:58.949 13:14:11 
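The round traced above repeats the same host-side configuration for every key. Condensed into a standalone sketch (the subsystem NQN, host NQN, address and key names are the ones used in this run; the rpc.py path matches the workspace above, and the plain rpc.py call to the target is an approximation of the framework's rpc_cmd helper; key0/ckey0 are DH-HMAC-CHAP keys registered earlier in auth.sh, outside this chunk):

# Condensed sketch of one sha384/ffdhe8192 configuration round, as traced above.
SPDK=/home/vagrant/spdk_repo/spdk
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

# Host-side bdev layer: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side: allow the host on the subsystem and bind key0/ckey0 to it.
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same key pair.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0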
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:58.949 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:58.949 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:59.206 13:14:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.138 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:00.395 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.396 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.396 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.396 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.396 13:14:12 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.396 13:14:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.960 00:30:00.960 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:00.960 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:00.960 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.235 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:01.235 { 00:30:01.235 "auth": { 00:30:01.235 "dhgroup": "ffdhe8192", 00:30:01.235 "digest": "sha384", 00:30:01.235 "state": "completed" 00:30:01.235 }, 00:30:01.235 "cntlid": 91, 00:30:01.236 "listen_address": { 00:30:01.236 "adrfam": "IPv4", 00:30:01.236 "traddr": "10.0.0.2", 00:30:01.236 "trsvcid": "4420", 00:30:01.236 "trtype": "TCP" 00:30:01.236 }, 00:30:01.236 "peer_address": { 00:30:01.236 "adrfam": "IPv4", 00:30:01.236 "traddr": "10.0.0.1", 00:30:01.236 "trsvcid": "49028", 00:30:01.236 "trtype": "TCP" 00:30:01.236 }, 00:30:01.236 "qid": 0, 00:30:01.236 "state": "enabled", 00:30:01.236 "thread": "nvmf_tgt_poll_group_000" 00:30:01.236 } 00:30:01.236 ]' 00:30:01.236 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:01.236 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:01.236 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:01.493 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:01.493 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:01.493 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:01.493 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:01.493 13:14:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:01.752 13:14:14 
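After each attach, the script checks that the controller actually came up and that the negotiated auth parameters match what was configured. A minimal sketch of that verification step, reusing the exact RPCs and jq filters from the trace (SPDK and SUBNQN as in the previous sketch; the expected sha384/ffdhe8192 values are the ones for this round):

SPDK=/home/vagrant/spdk_repo/spdk
SUBNQN=nqn.2024-03.io.spdk:cnode0

# The controller must exist on the host side under the name passed to -b.
name=$("$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the qpair's auth block reports the negotiated digest, dhgroup and state.
qpairs=$("$SPDK"/scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear the bdev controller down before the next iteration.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0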
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:02.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:02.318 13:14:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.884 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:03.449 00:30:03.449 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:03.449 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:03.449 13:14:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:03.707 { 00:30:03.707 "auth": { 00:30:03.707 "dhgroup": "ffdhe8192", 00:30:03.707 "digest": "sha384", 00:30:03.707 "state": "completed" 00:30:03.707 }, 00:30:03.707 "cntlid": 93, 00:30:03.707 "listen_address": { 00:30:03.707 "adrfam": "IPv4", 00:30:03.707 "traddr": "10.0.0.2", 00:30:03.707 "trsvcid": "4420", 00:30:03.707 "trtype": "TCP" 00:30:03.707 }, 00:30:03.707 "peer_address": { 00:30:03.707 "adrfam": "IPv4", 00:30:03.707 "traddr": "10.0.0.1", 00:30:03.707 "trsvcid": "38532", 00:30:03.707 "trtype": "TCP" 00:30:03.707 }, 00:30:03.707 "qid": 0, 00:30:03.707 "state": "enabled", 00:30:03.707 "thread": "nvmf_tgt_poll_group_000" 00:30:03.707 } 00:30:03.707 ]' 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:03.707 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:03.965 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:03.965 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:03.965 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:03.965 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:03.965 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:04.223 13:14:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:30:05.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:05.157 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:05.417 13:14:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:05.986 00:30:05.986 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:05.986 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:05.986 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:06.243 13:14:18 
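Each round also exercises the kernel initiator: the same subsystem is connected with nvme-cli using the plaintext DHHC-1 secrets, then disconnected and the host entry removed before the next combination. A sketch of that leg with the secrets elided (the full DHHC-1 strings appear verbatim in the trace; rounds that use key3 pass only --dhchap-secret, since no controller secret is configured for it):

SPDK=/home/vagrant/spdk_repo/spdk
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

# Kernel initiator leg: in-band DH-HMAC-CHAP via nvme-cli.
# <host-secret>/<ctrl-secret> stand for the DHHC-1:..:..: strings shown in the trace.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a \
    --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

nvme disconnect -n "$SUBNQN"   # prints "... disconnected 1 controller(s)" on success

# Drop the host from the subsystem so the next key/dhgroup combination starts clean.
"$SPDK"/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"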
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.243 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:06.243 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.243 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.243 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.243 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:06.243 { 00:30:06.243 "auth": { 00:30:06.243 "dhgroup": "ffdhe8192", 00:30:06.243 "digest": "sha384", 00:30:06.243 "state": "completed" 00:30:06.243 }, 00:30:06.243 "cntlid": 95, 00:30:06.243 "listen_address": { 00:30:06.243 "adrfam": "IPv4", 00:30:06.243 "traddr": "10.0.0.2", 00:30:06.243 "trsvcid": "4420", 00:30:06.244 "trtype": "TCP" 00:30:06.244 }, 00:30:06.244 "peer_address": { 00:30:06.244 "adrfam": "IPv4", 00:30:06.244 "traddr": "10.0.0.1", 00:30:06.244 "trsvcid": "38552", 00:30:06.244 "trtype": "TCP" 00:30:06.244 }, 00:30:06.244 "qid": 0, 00:30:06.244 "state": "enabled", 00:30:06.244 "thread": "nvmf_tgt_poll_group_000" 00:30:06.244 } 00:30:06.244 ]' 00:30:06.244 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:06.244 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:06.244 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:06.501 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:06.501 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:06.501 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:06.501 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:06.501 13:14:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:06.760 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:07.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:07.695 13:14:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.953 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:08.211 00:30:08.211 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:08.211 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:08.211 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.469 13:14:20 
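At this point the trace moves from sha384 to sha512 and from the ffdhe groups to the null dhgroup; the auth.sh@91/@92/@93 markers show the loops driving the sweep. Reconstructed shape of that driver loop, condensed to the helpers discussed above (the digests/dhgroups/keys arrays are populated earlier in target/auth.sh and are not shown in this chunk):

# Outline of the sweep visible in the trace: every digest x dhgroup x key id combination.
for digest in "${digests[@]}"; do                 # auth.sh@91
    for dhgroup in "${dhgroups[@]}"; do           # auth.sh@92
        for keyid in "${!keys[@]}"; do            # auth.sh@93
            # Pin the host bdev layer to exactly this combination ...
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ... then run one attach/verify/detach round with key$keyid.
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@96
        done
    done
done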
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:08.469 { 00:30:08.469 "auth": { 00:30:08.469 "dhgroup": "null", 00:30:08.469 "digest": "sha512", 00:30:08.469 "state": "completed" 00:30:08.469 }, 00:30:08.469 "cntlid": 97, 00:30:08.469 "listen_address": { 00:30:08.469 "adrfam": "IPv4", 00:30:08.469 "traddr": "10.0.0.2", 00:30:08.469 "trsvcid": "4420", 00:30:08.469 "trtype": "TCP" 00:30:08.469 }, 00:30:08.469 "peer_address": { 00:30:08.469 "adrfam": "IPv4", 00:30:08.469 "traddr": "10.0.0.1", 00:30:08.469 "trsvcid": "38574", 00:30:08.469 "trtype": "TCP" 00:30:08.469 }, 00:30:08.469 "qid": 0, 00:30:08.469 "state": "enabled", 00:30:08.469 "thread": "nvmf_tgt_poll_group_000" 00:30:08.469 } 00:30:08.469 ]' 00:30:08.469 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:08.470 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:08.470 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:08.726 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:08.727 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:08.727 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:08.727 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:08.727 13:14:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:08.984 13:14:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:09.549 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:09.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:09.549 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:09.549 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.549 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.807 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.807 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:09.807 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:09.807 13:14:22 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.094 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.095 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.375 00:30:10.375 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:10.375 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:10.375 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:10.633 { 00:30:10.633 "auth": { 00:30:10.633 "dhgroup": "null", 00:30:10.633 "digest": "sha512", 00:30:10.633 "state": "completed" 00:30:10.633 }, 00:30:10.633 "cntlid": 99, 00:30:10.633 "listen_address": { 00:30:10.633 "adrfam": "IPv4", 
00:30:10.633 "traddr": "10.0.0.2", 00:30:10.633 "trsvcid": "4420", 00:30:10.633 "trtype": "TCP" 00:30:10.633 }, 00:30:10.633 "peer_address": { 00:30:10.633 "adrfam": "IPv4", 00:30:10.633 "traddr": "10.0.0.1", 00:30:10.633 "trsvcid": "38600", 00:30:10.633 "trtype": "TCP" 00:30:10.633 }, 00:30:10.633 "qid": 0, 00:30:10.633 "state": "enabled", 00:30:10.633 "thread": "nvmf_tgt_poll_group_000" 00:30:10.633 } 00:30:10.633 ]' 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:10.633 13:14:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:10.633 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:10.633 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:10.633 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:10.633 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:10.633 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:11.199 13:14:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:11.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:11.765 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:12.331 13:14:24 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:12.331 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:12.589 00:30:12.589 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:12.589 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:12.589 13:14:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:12.847 { 00:30:12.847 "auth": { 00:30:12.847 "dhgroup": "null", 00:30:12.847 "digest": "sha512", 00:30:12.847 "state": "completed" 00:30:12.847 }, 00:30:12.847 "cntlid": 101, 00:30:12.847 "listen_address": { 00:30:12.847 "adrfam": "IPv4", 00:30:12.847 "traddr": "10.0.0.2", 00:30:12.847 "trsvcid": "4420", 00:30:12.847 "trtype": "TCP" 00:30:12.847 }, 00:30:12.847 "peer_address": { 00:30:12.847 "adrfam": "IPv4", 00:30:12.847 "traddr": "10.0.0.1", 00:30:12.847 "trsvcid": "53724", 00:30:12.847 "trtype": "TCP" 00:30:12.847 }, 00:30:12.847 "qid": 0, 00:30:12.847 "state": "enabled", 00:30:12.847 "thread": "nvmf_tgt_poll_group_000" 00:30:12.847 } 00:30:12.847 ]' 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:12.847 13:14:25 
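Throughout the trace every bdev_nvme_* call goes through the hostrpc helper, which auth.sh@31 expands to rpc.py against /var/tmp/host.sock, i.e. a second SPDK application playing the NVMe host, while the bare rpc_cmd calls configure the target. A wrapper with the same effect (the function body is an assumption reconstructed from that expansion; only the expanded command line is literal):

# Host-side RPC helper as it expands in the trace (target/auth.sh@31):
# host/bdev RPCs go to the initiator app listening on /var/tmp/host.sock,
# target RPCs (nvmf_subsystem_*) go through rpc_cmd to the target app.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

# Example, matching the teardown seen after every verification step:
hostrpc bdev_nvme_detach_controller nvme0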
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:12.847 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:13.106 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:13.106 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:13.106 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:13.106 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:13.106 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:13.364 13:14:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:14.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:14.297 
13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:14.297 13:14:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:14.863 00:30:14.863 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:14.863 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:14.863 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:15.121 { 00:30:15.121 "auth": { 00:30:15.121 "dhgroup": "null", 00:30:15.121 "digest": "sha512", 00:30:15.121 "state": "completed" 00:30:15.121 }, 00:30:15.121 "cntlid": 103, 00:30:15.121 "listen_address": { 00:30:15.121 "adrfam": "IPv4", 00:30:15.121 "traddr": "10.0.0.2", 00:30:15.121 "trsvcid": "4420", 00:30:15.121 "trtype": "TCP" 00:30:15.121 }, 00:30:15.121 "peer_address": { 00:30:15.121 "adrfam": "IPv4", 00:30:15.121 "traddr": "10.0.0.1", 00:30:15.121 "trsvcid": "53746", 00:30:15.121 "trtype": "TCP" 00:30:15.121 }, 00:30:15.121 "qid": 0, 00:30:15.121 "state": "enabled", 00:30:15.121 "thread": "nvmf_tgt_poll_group_000" 00:30:15.121 } 00:30:15.121 ]' 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:15.121 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:15.379 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:15.379 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:30:15.379 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:15.637 13:14:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:16.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:16.569 13:14:28 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:16.569 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.134 00:30:17.134 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:17.134 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:17.134 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:17.392 { 00:30:17.392 "auth": { 00:30:17.392 "dhgroup": "ffdhe2048", 00:30:17.392 "digest": "sha512", 00:30:17.392 "state": "completed" 00:30:17.392 }, 00:30:17.392 "cntlid": 105, 00:30:17.392 "listen_address": { 00:30:17.392 "adrfam": "IPv4", 00:30:17.392 "traddr": "10.0.0.2", 00:30:17.392 "trsvcid": "4420", 00:30:17.392 "trtype": "TCP" 00:30:17.392 }, 00:30:17.392 "peer_address": { 00:30:17.392 "adrfam": "IPv4", 00:30:17.392 "traddr": "10.0.0.1", 00:30:17.392 "trsvcid": "53778", 00:30:17.392 "trtype": "TCP" 00:30:17.392 }, 00:30:17.392 "qid": 0, 00:30:17.392 "state": "enabled", 00:30:17.392 "thread": "nvmf_tgt_poll_group_000" 00:30:17.392 } 00:30:17.392 ]' 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:17.392 13:14:29 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:17.651 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:18.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:18.585 13:14:30 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.843 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:19.409 00:30:19.409 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:19.409 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:19.409 13:14:31 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:19.667 { 00:30:19.667 "auth": { 00:30:19.667 "dhgroup": "ffdhe2048", 00:30:19.667 "digest": "sha512", 00:30:19.667 "state": "completed" 00:30:19.667 }, 00:30:19.667 "cntlid": 107, 00:30:19.667 "listen_address": { 00:30:19.667 "adrfam": "IPv4", 00:30:19.667 "traddr": "10.0.0.2", 00:30:19.667 "trsvcid": "4420", 00:30:19.667 "trtype": "TCP" 00:30:19.667 }, 00:30:19.667 "peer_address": { 00:30:19.667 "adrfam": "IPv4", 00:30:19.667 "traddr": "10.0.0.1", 00:30:19.667 "trsvcid": "53808", 00:30:19.667 "trtype": "TCP" 00:30:19.667 }, 00:30:19.667 "qid": 0, 00:30:19.667 "state": "enabled", 00:30:19.667 "thread": "nvmf_tgt_poll_group_000" 00:30:19.667 } 00:30:19.667 ]' 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:19.667 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:19.926 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:19.926 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:19.926 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:20.184 13:14:32 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:20.749 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:20.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:20.750 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.314 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.572 00:30:21.572 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:21.572 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:21.572 13:14:33 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:21.831 13:14:34 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:21.831 { 00:30:21.831 "auth": { 00:30:21.831 "dhgroup": "ffdhe2048", 00:30:21.831 "digest": "sha512", 00:30:21.831 "state": "completed" 00:30:21.831 }, 00:30:21.831 "cntlid": 109, 00:30:21.831 "listen_address": { 00:30:21.831 "adrfam": "IPv4", 00:30:21.831 "traddr": "10.0.0.2", 00:30:21.831 "trsvcid": "4420", 00:30:21.831 "trtype": "TCP" 00:30:21.831 }, 00:30:21.831 "peer_address": { 00:30:21.831 "adrfam": "IPv4", 00:30:21.831 "traddr": "10.0.0.1", 00:30:21.831 "trsvcid": "53830", 00:30:21.831 "trtype": "TCP" 00:30:21.831 }, 00:30:21.831 "qid": 0, 00:30:21.831 "state": "enabled", 00:30:21.831 "thread": "nvmf_tgt_poll_group_000" 00:30:21.831 } 00:30:21.831 ]' 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:21.831 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:22.089 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:22.089 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:22.089 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:22.089 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:22.089 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:22.347 13:14:34 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:23.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:23.346 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:23.347 13:14:35 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:23.625 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.883 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.141 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.141 13:14:36 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:24.141 { 00:30:24.141 "auth": { 00:30:24.141 "dhgroup": "ffdhe2048", 00:30:24.141 "digest": "sha512", 00:30:24.141 "state": "completed" 00:30:24.141 }, 00:30:24.141 "cntlid": 111, 00:30:24.141 "listen_address": { 00:30:24.142 "adrfam": "IPv4", 00:30:24.142 "traddr": "10.0.0.2", 00:30:24.142 "trsvcid": "4420", 00:30:24.142 "trtype": "TCP" 00:30:24.142 }, 00:30:24.142 "peer_address": { 00:30:24.142 "adrfam": "IPv4", 00:30:24.142 "traddr": "10.0.0.1", 00:30:24.142 "trsvcid": "54144", 00:30:24.142 "trtype": "TCP" 00:30:24.142 }, 00:30:24.142 "qid": 0, 00:30:24.142 "state": "enabled", 00:30:24.142 "thread": "nvmf_tgt_poll_group_000" 00:30:24.142 } 00:30:24.142 ]' 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:24.142 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:24.400 13:14:36 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:25.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:25.336 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:30:25.594 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:25.595 13:14:37 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:25.853 00:30:25.853 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:25.853 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:25.853 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:26.111 { 00:30:26.111 "auth": { 00:30:26.111 "dhgroup": "ffdhe3072", 00:30:26.111 "digest": "sha512", 00:30:26.111 "state": "completed" 00:30:26.111 }, 00:30:26.111 "cntlid": 113, 00:30:26.111 "listen_address": { 00:30:26.111 "adrfam": "IPv4", 00:30:26.111 "traddr": "10.0.0.2", 00:30:26.111 "trsvcid": "4420", 00:30:26.111 "trtype": "TCP" 00:30:26.111 }, 00:30:26.111 "peer_address": { 00:30:26.111 "adrfam": 
"IPv4", 00:30:26.111 "traddr": "10.0.0.1", 00:30:26.111 "trsvcid": "54162", 00:30:26.111 "trtype": "TCP" 00:30:26.111 }, 00:30:26.111 "qid": 0, 00:30:26.111 "state": "enabled", 00:30:26.111 "thread": "nvmf_tgt_poll_group_000" 00:30:26.111 } 00:30:26.111 ]' 00:30:26.111 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:26.369 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:26.627 13:14:38 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:27.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:27.560 13:14:39 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:27.818 
13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:27.818 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:28.075 00:30:28.075 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:28.075 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:28.075 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:28.334 { 00:30:28.334 "auth": { 00:30:28.334 "dhgroup": "ffdhe3072", 00:30:28.334 "digest": "sha512", 00:30:28.334 "state": "completed" 00:30:28.334 }, 00:30:28.334 "cntlid": 115, 00:30:28.334 "listen_address": { 00:30:28.334 "adrfam": "IPv4", 00:30:28.334 "traddr": "10.0.0.2", 00:30:28.334 "trsvcid": "4420", 00:30:28.334 "trtype": "TCP" 00:30:28.334 }, 00:30:28.334 "peer_address": { 00:30:28.334 "adrfam": "IPv4", 00:30:28.334 "traddr": "10.0.0.1", 00:30:28.334 "trsvcid": "54200", 00:30:28.334 "trtype": "TCP" 00:30:28.334 }, 00:30:28.334 "qid": 0, 00:30:28.334 "state": "enabled", 00:30:28.334 "thread": "nvmf_tgt_poll_group_000" 00:30:28.334 } 00:30:28.334 ]' 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:28.334 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:28.334 
13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:28.592 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:28.592 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:28.592 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:28.592 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:28.592 13:14:40 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:28.849 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:29.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:29.803 13:14:41 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:30.061 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:30.319 00:30:30.319 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:30.319 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:30.319 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:30.577 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.577 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:30.577 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.577 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.577 13:14:42 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.577 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:30.577 { 00:30:30.577 "auth": { 00:30:30.577 "dhgroup": "ffdhe3072", 00:30:30.577 "digest": "sha512", 00:30:30.577 "state": "completed" 00:30:30.577 }, 00:30:30.577 "cntlid": 117, 00:30:30.577 "listen_address": { 00:30:30.577 "adrfam": "IPv4", 00:30:30.577 "traddr": "10.0.0.2", 00:30:30.577 "trsvcid": "4420", 00:30:30.577 "trtype": "TCP" 00:30:30.577 }, 00:30:30.577 "peer_address": { 00:30:30.577 "adrfam": "IPv4", 00:30:30.577 "traddr": "10.0.0.1", 00:30:30.577 "trsvcid": "54232", 00:30:30.577 "trtype": "TCP" 00:30:30.577 }, 00:30:30.577 "qid": 0, 00:30:30.577 "state": "enabled", 00:30:30.577 "thread": "nvmf_tgt_poll_group_000" 00:30:30.577 } 00:30:30.577 ]' 00:30:30.577 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- 
# hostrpc bdev_nvme_detach_controller nvme0 00:30:30.835 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:31.093 13:14:43 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:32.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:32.055 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.333 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:32.333 13:14:44 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:32.592 00:30:32.592 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:32.592 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:32.592 13:14:44 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.854 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:32.854 { 00:30:32.854 "auth": { 00:30:32.854 "dhgroup": "ffdhe3072", 00:30:32.854 "digest": "sha512", 00:30:32.854 "state": "completed" 00:30:32.854 }, 00:30:32.854 "cntlid": 119, 00:30:32.854 "listen_address": { 00:30:32.854 "adrfam": "IPv4", 00:30:32.854 "traddr": "10.0.0.2", 00:30:32.854 "trsvcid": "4420", 00:30:32.854 "trtype": "TCP" 00:30:32.854 }, 00:30:32.854 "peer_address": { 00:30:32.854 "adrfam": "IPv4", 00:30:32.854 "traddr": "10.0.0.1", 00:30:32.855 "trsvcid": "60572", 00:30:32.855 "trtype": "TCP" 00:30:32.855 }, 00:30:32.855 "qid": 0, 00:30:32.855 "state": "enabled", 00:30:32.855 "thread": "nvmf_tgt_poll_group_000" 00:30:32.855 } 00:30:32.855 ]' 00:30:32.855 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:32.855 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:32.855 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:33.112 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:33.112 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:33.112 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:33.112 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:33.112 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:33.370 13:14:45 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 
00:30:33.936 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:34.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:34.194 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.452 13:14:46 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:34.710 00:30:34.710 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:34.710 13:14:47 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:34.710 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:34.968 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.968 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:34.968 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.968 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:35.226 { 00:30:35.226 "auth": { 00:30:35.226 "dhgroup": "ffdhe4096", 00:30:35.226 "digest": "sha512", 00:30:35.226 "state": "completed" 00:30:35.226 }, 00:30:35.226 "cntlid": 121, 00:30:35.226 "listen_address": { 00:30:35.226 "adrfam": "IPv4", 00:30:35.226 "traddr": "10.0.0.2", 00:30:35.226 "trsvcid": "4420", 00:30:35.226 "trtype": "TCP" 00:30:35.226 }, 00:30:35.226 "peer_address": { 00:30:35.226 "adrfam": "IPv4", 00:30:35.226 "traddr": "10.0.0.1", 00:30:35.226 "trsvcid": "60594", 00:30:35.226 "trtype": "TCP" 00:30:35.226 }, 00:30:35.226 "qid": 0, 00:30:35.226 "state": "enabled", 00:30:35.226 "thread": "nvmf_tgt_poll_group_000" 00:30:35.226 } 00:30:35.226 ]' 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:35.226 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:35.483 13:14:47 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:36.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:36.416 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:36.674 13:14:48 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:36.932 00:30:36.932 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:36.932 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:36.932 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:37.191 { 00:30:37.191 "auth": { 00:30:37.191 "dhgroup": "ffdhe4096", 00:30:37.191 "digest": "sha512", 00:30:37.191 "state": "completed" 00:30:37.191 }, 00:30:37.191 "cntlid": 123, 00:30:37.191 "listen_address": { 00:30:37.191 "adrfam": "IPv4", 00:30:37.191 "traddr": "10.0.0.2", 00:30:37.191 "trsvcid": "4420", 00:30:37.191 "trtype": "TCP" 00:30:37.191 }, 00:30:37.191 "peer_address": { 00:30:37.191 "adrfam": "IPv4", 00:30:37.191 "traddr": "10.0.0.1", 00:30:37.191 "trsvcid": "60630", 00:30:37.191 "trtype": "TCP" 00:30:37.191 }, 00:30:37.191 "qid": 0, 00:30:37.191 "state": "enabled", 00:30:37.191 "thread": "nvmf_tgt_poll_group_000" 00:30:37.191 } 00:30:37.191 ]' 00:30:37.191 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:37.449 13:14:49 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:37.707 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:38.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:38.663 13:14:50 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:38.663 13:14:50 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:38.921 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:39.486 00:30:39.486 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:39.486 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:39.486 13:14:51 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:39.744 { 00:30:39.744 "auth": { 00:30:39.744 "dhgroup": 
"ffdhe4096", 00:30:39.744 "digest": "sha512", 00:30:39.744 "state": "completed" 00:30:39.744 }, 00:30:39.744 "cntlid": 125, 00:30:39.744 "listen_address": { 00:30:39.744 "adrfam": "IPv4", 00:30:39.744 "traddr": "10.0.0.2", 00:30:39.744 "trsvcid": "4420", 00:30:39.744 "trtype": "TCP" 00:30:39.744 }, 00:30:39.744 "peer_address": { 00:30:39.744 "adrfam": "IPv4", 00:30:39.744 "traddr": "10.0.0.1", 00:30:39.744 "trsvcid": "60650", 00:30:39.744 "trtype": "TCP" 00:30:39.744 }, 00:30:39.744 "qid": 0, 00:30:39.744 "state": "enabled", 00:30:39.744 "thread": "nvmf_tgt_poll_group_000" 00:30:39.744 } 00:30:39.744 ]' 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:39.744 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:40.002 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:40.002 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:40.002 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:40.259 13:14:52 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:40.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:40.823 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:30:41.429 13:14:53 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:41.429 13:14:53 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:41.690 00:30:41.690 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:41.690 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:41.690 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:42.283 { 00:30:42.283 "auth": { 00:30:42.283 "dhgroup": "ffdhe4096", 00:30:42.283 "digest": "sha512", 00:30:42.283 "state": "completed" 00:30:42.283 }, 00:30:42.283 "cntlid": 127, 00:30:42.283 "listen_address": { 00:30:42.283 "adrfam": "IPv4", 00:30:42.283 "traddr": "10.0.0.2", 00:30:42.283 "trsvcid": "4420", 00:30:42.283 "trtype": "TCP" 00:30:42.283 }, 00:30:42.283 "peer_address": { 00:30:42.283 "adrfam": "IPv4", 00:30:42.283 "traddr": "10.0.0.1", 00:30:42.283 "trsvcid": "60664", 00:30:42.283 "trtype": "TCP" 00:30:42.283 }, 00:30:42.283 "qid": 0, 00:30:42.283 "state": "enabled", 00:30:42.283 "thread": "nvmf_tgt_poll_group_000" 00:30:42.283 } 00:30:42.283 ]' 
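The qpairs dump captured above (and each one like it) is what the following jq assertions inspect: auth.digest, auth.dhgroup and auth.state must report the negotiated sha512/ffdhe4096/completed values. Condensed from this xtrace, one round of the dhgroup-by-keyid sweep (target/auth.sh@92-96 driving connect_authenticate at @34-56) looks roughly like the sketch below. It is a reconstruction, not the script itself: rpc_cmd and hostrpc are the suite's own wrappers (the trace expands hostrpc to scripts/rpc.py -s /var/tmp/host.sock), HOSTNQN, HOSTID, KEYID, KEY and CKEY stand in for the uuid-based host NQN, key index and DHHC-1 secrets visible in the log, and the shell plumbing is approximated.

  # host side: restrict the allowed digest/dhgroup for this round
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # target side: register the host with DH-HMAC-CHAP key N (the ckey argument is
  # dropped when no controller key exists for that id, cf. the ${ckeys[$3]:+...} expansion)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
  # host side: attach over the bdev path and verify what was negotiated
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0
  # kernel initiator path: same subsystem, secrets handed to nvme-cli, then teardown
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

Every subsequent block in this stretch of the log is the same round with the next key id or the next dhgroup (ffdhe6144, then ffdhe8192) substituted.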
00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:42.283 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:42.284 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:42.284 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:42.284 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:42.541 13:14:54 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:43.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:43.242 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:43.554 13:14:55 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:43.554 13:14:55 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:44.142 00:30:44.142 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:44.142 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:44.142 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:44.143 { 00:30:44.143 "auth": { 00:30:44.143 "dhgroup": "ffdhe6144", 00:30:44.143 "digest": "sha512", 00:30:44.143 "state": "completed" 00:30:44.143 }, 00:30:44.143 "cntlid": 129, 00:30:44.143 "listen_address": { 00:30:44.143 "adrfam": "IPv4", 00:30:44.143 "traddr": "10.0.0.2", 00:30:44.143 "trsvcid": "4420", 00:30:44.143 "trtype": "TCP" 00:30:44.143 }, 00:30:44.143 "peer_address": { 00:30:44.143 "adrfam": "IPv4", 00:30:44.143 "traddr": "10.0.0.1", 00:30:44.143 "trsvcid": "53140", 00:30:44.143 "trtype": "TCP" 00:30:44.143 }, 00:30:44.143 "qid": 0, 00:30:44.143 "state": "enabled", 00:30:44.143 "thread": "nvmf_tgt_poll_group_000" 00:30:44.143 } 00:30:44.143 ]' 00:30:44.143 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:44.399 13:14:56 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:44.399 13:14:56 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:44.657 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:45.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:45.589 13:14:57 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:45.847 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.848 13:14:58 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:45.848 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:46.413 00:30:46.413 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:46.413 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:46.413 13:14:58 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:46.670 { 00:30:46.670 "auth": { 00:30:46.670 "dhgroup": "ffdhe6144", 00:30:46.670 "digest": "sha512", 00:30:46.670 "state": "completed" 00:30:46.670 }, 00:30:46.670 "cntlid": 131, 00:30:46.670 "listen_address": { 00:30:46.670 "adrfam": "IPv4", 00:30:46.670 "traddr": "10.0.0.2", 00:30:46.670 "trsvcid": "4420", 00:30:46.670 "trtype": "TCP" 00:30:46.670 }, 00:30:46.670 "peer_address": { 00:30:46.670 "adrfam": "IPv4", 00:30:46.670 "traddr": "10.0.0.1", 00:30:46.670 "trsvcid": "53160", 00:30:46.670 "trtype": "TCP" 00:30:46.670 }, 00:30:46.670 "qid": 0, 00:30:46.670 "state": "enabled", 00:30:46.670 "thread": "nvmf_tgt_poll_group_000" 00:30:46.670 } 00:30:46.670 ]' 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:46.670 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:46.929 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:46.929 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:46.929 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:46.929 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:46.929 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:47.188 13:14:59 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:48.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:48.121 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:48.378 13:15:00 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:48.636 00:30:48.636 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:48.636 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:48.636 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:49.203 { 00:30:49.203 "auth": { 00:30:49.203 "dhgroup": "ffdhe6144", 00:30:49.203 "digest": "sha512", 00:30:49.203 "state": "completed" 00:30:49.203 }, 00:30:49.203 "cntlid": 133, 00:30:49.203 "listen_address": { 00:30:49.203 "adrfam": "IPv4", 00:30:49.203 "traddr": "10.0.0.2", 00:30:49.203 "trsvcid": "4420", 00:30:49.203 "trtype": "TCP" 00:30:49.203 }, 00:30:49.203 "peer_address": { 00:30:49.203 "adrfam": "IPv4", 00:30:49.203 "traddr": "10.0.0.1", 00:30:49.203 "trsvcid": "53186", 00:30:49.203 "trtype": "TCP" 00:30:49.203 }, 00:30:49.203 "qid": 0, 00:30:49.203 "state": "enabled", 00:30:49.203 "thread": "nvmf_tgt_poll_group_000" 00:30:49.203 } 00:30:49.203 ]' 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:49.203 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:49.460 13:15:01 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:30:50.393 13:15:02 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:50.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:50.393 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:50.651 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:30:50.651 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:50.651 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:50.651 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:50.652 13:15:02 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:50.907 00:30:50.907 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:50.907 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:50.907 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:51.472 { 00:30:51.472 "auth": { 00:30:51.472 "dhgroup": "ffdhe6144", 00:30:51.472 "digest": "sha512", 00:30:51.472 "state": "completed" 00:30:51.472 }, 00:30:51.472 "cntlid": 135, 00:30:51.472 "listen_address": { 00:30:51.472 "adrfam": "IPv4", 00:30:51.472 "traddr": "10.0.0.2", 00:30:51.472 "trsvcid": "4420", 00:30:51.472 "trtype": "TCP" 00:30:51.472 }, 00:30:51.472 "peer_address": { 00:30:51.472 "adrfam": "IPv4", 00:30:51.472 "traddr": "10.0.0.1", 00:30:51.472 "trsvcid": "53216", 00:30:51.472 "trtype": "TCP" 00:30:51.472 }, 00:30:51.472 "qid": 0, 00:30:51.472 "state": "enabled", 00:30:51.472 "thread": "nvmf_tgt_poll_group_000" 00:30:51.472 } 00:30:51.472 ]' 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:51.472 13:15:03 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:51.729 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:52.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.662 13:15:04 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:52.662 13:15:04 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:52.662 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:53.595 00:30:53.595 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:53.595 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:53.595 13:15:05 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.595 13:15:06 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:53.595 { 00:30:53.595 "auth": { 00:30:53.595 "dhgroup": "ffdhe8192", 00:30:53.595 "digest": "sha512", 00:30:53.595 "state": "completed" 00:30:53.595 }, 00:30:53.595 "cntlid": 137, 00:30:53.595 "listen_address": { 00:30:53.595 "adrfam": "IPv4", 00:30:53.595 "traddr": "10.0.0.2", 00:30:53.595 "trsvcid": "4420", 00:30:53.595 "trtype": "TCP" 00:30:53.595 }, 00:30:53.595 "peer_address": { 00:30:53.595 "adrfam": "IPv4", 00:30:53.595 "traddr": "10.0.0.1", 00:30:53.595 "trsvcid": "51824", 00:30:53.595 "trtype": "TCP" 00:30:53.595 }, 00:30:53.595 "qid": 0, 00:30:53.595 "state": "enabled", 00:30:53.595 "thread": "nvmf_tgt_poll_group_000" 00:30:53.595 } 00:30:53.595 ]' 00:30:53.595 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:53.853 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:54.111 13:15:06 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:30:54.697 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:54.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:30:54.954 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.211 13:15:07 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:55.776 00:30:55.776 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:55.776 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:55.776 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:56.036 { 00:30:56.036 "auth": { 00:30:56.036 "dhgroup": "ffdhe8192", 00:30:56.036 "digest": "sha512", 00:30:56.036 "state": "completed" 00:30:56.036 }, 00:30:56.036 "cntlid": 139, 00:30:56.036 
"listen_address": { 00:30:56.036 "adrfam": "IPv4", 00:30:56.036 "traddr": "10.0.0.2", 00:30:56.036 "trsvcid": "4420", 00:30:56.036 "trtype": "TCP" 00:30:56.036 }, 00:30:56.036 "peer_address": { 00:30:56.036 "adrfam": "IPv4", 00:30:56.036 "traddr": "10.0.0.1", 00:30:56.036 "trsvcid": "51862", 00:30:56.036 "trtype": "TCP" 00:30:56.036 }, 00:30:56.036 "qid": 0, 00:30:56.036 "state": "enabled", 00:30:56.036 "thread": "nvmf_tgt_poll_group_000" 00:30:56.036 } 00:30:56.036 ]' 00:30:56.036 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:56.298 13:15:08 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:56.875 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:01:MWI4ZmMzNzEyY2I3MWE1MDdhODk0MjFiZGMwZjY5MzeYXTq0: --dhchap-ctrl-secret DHHC-1:02:ZTk2ZjJhMzk1MTA1ZTY0YWQxM2ViYWIzNDkxOGU3OTFhZjRmZDM3ZWRlZjMxMTE4J4BXyg==: 00:30:57.459 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:57.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:57.459 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:30:57.459 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.459 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.728 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.728 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:57.728 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:57.728 13:15:09 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:57.989 13:15:10 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:57.989 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.990 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.990 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.990 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:57.990 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.554 00:30:58.554 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:58.554 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:58.554 13:15:10 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:59.119 { 00:30:59.119 "auth": { 00:30:59.119 "dhgroup": "ffdhe8192", 00:30:59.119 "digest": "sha512", 00:30:59.119 "state": "completed" 00:30:59.119 }, 00:30:59.119 "cntlid": 141, 00:30:59.119 "listen_address": { 00:30:59.119 "adrfam": "IPv4", 00:30:59.119 "traddr": "10.0.0.2", 00:30:59.119 "trsvcid": "4420", 00:30:59.119 "trtype": "TCP" 00:30:59.119 }, 00:30:59.119 "peer_address": { 00:30:59.119 "adrfam": "IPv4", 00:30:59.119 "traddr": "10.0.0.1", 00:30:59.119 "trsvcid": "51882", 00:30:59.119 "trtype": "TCP" 00:30:59.119 }, 00:30:59.119 "qid": 0, 00:30:59.119 "state": "enabled", 00:30:59.119 "thread": "nvmf_tgt_poll_group_000" 00:30:59.119 } 00:30:59.119 ]' 00:30:59.119 13:15:11 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:59.119 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:59.377 13:15:11 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:02:NDNkZWIyMmVhMjAxNjQ4MzA0NDA0YTAwMTQyMDE5YzJmNWZjMGI1Zjk1YTI3NWE5UcXjwA==: --dhchap-ctrl-secret DHHC-1:01:ODAxYTVkOTU4NDMwYjY1YmU3YWUyMWQ3NDE5MzM2MDjdnCCt: 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:00.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:00.310 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:00.569 13:15:12 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:01.134 00:31:01.134 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:01.134 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:01.134 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:01.401 { 00:31:01.401 "auth": { 00:31:01.401 "dhgroup": "ffdhe8192", 00:31:01.401 "digest": "sha512", 00:31:01.401 "state": "completed" 00:31:01.401 }, 00:31:01.401 "cntlid": 143, 00:31:01.401 "listen_address": { 00:31:01.401 "adrfam": "IPv4", 00:31:01.401 "traddr": "10.0.0.2", 00:31:01.401 "trsvcid": "4420", 00:31:01.401 "trtype": "TCP" 00:31:01.401 }, 00:31:01.401 "peer_address": { 00:31:01.401 "adrfam": "IPv4", 00:31:01.401 "traddr": "10.0.0.1", 00:31:01.401 "trsvcid": "51892", 00:31:01.401 "trtype": "TCP" 00:31:01.401 }, 00:31:01.401 "qid": 0, 00:31:01.401 "state": "enabled", 00:31:01.401 "thread": "nvmf_tgt_poll_group_000" 00:31:01.401 } 00:31:01.401 ]' 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:01.401 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:01.668 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:01.668 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:01.668 13:15:13 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:01.668 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:01.668 13:15:13 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:01.924 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:02.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:02.490 13:15:14 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.055 13:15:15 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.055 13:15:15 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.987 00:31:03.987 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:03.987 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:03.987 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:04.245 { 00:31:04.245 "auth": { 00:31:04.245 "dhgroup": "ffdhe8192", 00:31:04.245 "digest": "sha512", 00:31:04.245 "state": "completed" 00:31:04.245 }, 00:31:04.245 "cntlid": 145, 00:31:04.245 "listen_address": { 00:31:04.245 "adrfam": "IPv4", 00:31:04.245 "traddr": "10.0.0.2", 00:31:04.245 "trsvcid": "4420", 00:31:04.245 "trtype": "TCP" 00:31:04.245 }, 00:31:04.245 "peer_address": { 00:31:04.245 "adrfam": "IPv4", 00:31:04.245 "traddr": "10.0.0.1", 00:31:04.245 "trsvcid": "37008", 00:31:04.245 "trtype": "TCP" 00:31:04.245 }, 00:31:04.245 "qid": 0, 00:31:04.245 "state": "enabled", 00:31:04.245 "thread": "nvmf_tgt_poll_group_000" 00:31:04.245 } 00:31:04.245 ]' 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:04.245 13:15:16 
nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:04.245 13:15:16 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:04.810 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:00:YzM2MWRlY2UxYTAyNWYxYTYxYTJjMDU3YWY0OTFjYTkyNDk3Y2M1NWNhNDhkODY1SkKgWg==: --dhchap-ctrl-secret DHHC-1:03:ZTEzYWZlY2U0MjFjZTZjMzQ2YjM4N2M2MWI5ZWZiODkxMjU4YTQ1ZTk1MTIwNjExM2E5ZGM1YzIzYjBkMWZlM+gREgI=: 00:31:05.749 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:05.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:05.749 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:05.749 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.749 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:31:05.750 13:15:17 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:31:06.314 2024/07/15 13:15:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:06.314 request: 00:31:06.314 { 00:31:06.314 "method": "bdev_nvme_attach_controller", 00:31:06.314 "params": { 00:31:06.314 "name": "nvme0", 00:31:06.314 "trtype": "tcp", 00:31:06.314 "traddr": "10.0.0.2", 00:31:06.314 "adrfam": "ipv4", 00:31:06.314 "trsvcid": "4420", 00:31:06.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:06.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:06.314 "prchk_reftag": false, 00:31:06.314 "prchk_guard": false, 00:31:06.314 "hdgst": false, 00:31:06.314 "ddgst": false, 00:31:06.314 "dhchap_key": "key2" 00:31:06.314 } 00:31:06.314 } 00:31:06.314 Got JSON-RPC error response 00:31:06.314 GoRPCClient: error on JSON-RPC call 00:31:06.314 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:06.314 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:06.314 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:06.314 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:06.314 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:06.315 13:15:18 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:06.880 2024/07/15 13:15:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:06.880 request: 00:31:06.880 { 00:31:06.880 "method": "bdev_nvme_attach_controller", 00:31:06.880 "params": { 00:31:06.880 "name": "nvme0", 00:31:06.880 "trtype": "tcp", 00:31:06.880 "traddr": "10.0.0.2", 00:31:06.880 "adrfam": "ipv4", 00:31:06.880 "trsvcid": "4420", 00:31:06.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:06.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:06.880 "prchk_reftag": false, 00:31:06.880 "prchk_guard": false, 00:31:06.880 "hdgst": false, 00:31:06.880 "ddgst": false, 00:31:06.880 "dhchap_key": "key1", 00:31:06.880 "dhchap_ctrlr_key": "ckey2" 00:31:06.880 } 00:31:06.880 } 00:31:06.880 Got JSON-RPC error response 00:31:06.880 GoRPCClient: error on JSON-RPC call 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key1 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.880 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:07.813 2024/07/15 13:15:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:07.813 request: 00:31:07.813 { 00:31:07.813 "method": 
"bdev_nvme_attach_controller", 00:31:07.813 "params": { 00:31:07.813 "name": "nvme0", 00:31:07.813 "trtype": "tcp", 00:31:07.813 "traddr": "10.0.0.2", 00:31:07.813 "adrfam": "ipv4", 00:31:07.813 "trsvcid": "4420", 00:31:07.813 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:07.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:07.813 "prchk_reftag": false, 00:31:07.813 "prchk_guard": false, 00:31:07.813 "hdgst": false, 00:31:07.813 "ddgst": false, 00:31:07.813 "dhchap_key": "key1", 00:31:07.813 "dhchap_ctrlr_key": "ckey1" 00:31:07.813 } 00:31:07.813 } 00:31:07.813 Got JSON-RPC error response 00:31:07.813 GoRPCClient: error on JSON-RPC call 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 109784 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 109784 ']' 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 109784 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:07.813 13:15:19 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109784 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109784' 00:31:07.813 killing process with pid 109784 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 109784 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 109784 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:07.813 
13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=114680 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 114680 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode --wait-for-rpc -L nvmf_auth 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 114680 ']' 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:07.813 13:15:20 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 114680 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 114680 ']' 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
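For reference, the relaunch traced above boils down to starting nvmf_tgt in interrupt mode inside the test namespace and polling until its RPC socket answers. The sketch below is assembled from the command line already shown in this log; the polling loop is an illustrative stand-in for the script's waitforlisten helper, and the socket path is rpc.py's default /var/tmp/spdk.sock.

    # Sketch only (not part of the captured trace): restart the target in
    # interrupt mode with nvmf_auth logging and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll until the app answers on the default socket (stand-in for waitforlisten).
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done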
00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.191 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:09.449 13:15:21 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:10.382 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:10.382 { 00:31:10.382 "auth": { 00:31:10.382 "dhgroup": "ffdhe8192", 00:31:10.382 "digest": "sha512", 00:31:10.382 "state": "completed" 00:31:10.382 }, 00:31:10.382 "cntlid": 1, 00:31:10.382 "listen_address": { 00:31:10.382 "adrfam": "IPv4", 00:31:10.382 "traddr": "10.0.0.2", 00:31:10.382 "trsvcid": "4420", 00:31:10.382 "trtype": "TCP" 00:31:10.382 }, 00:31:10.382 "peer_address": { 00:31:10.382 "adrfam": "IPv4", 00:31:10.382 "traddr": "10.0.0.1", 00:31:10.382 "trsvcid": "37064", 00:31:10.382 "trtype": "TCP" 00:31:10.382 }, 00:31:10.382 "qid": 0, 00:31:10.382 "state": "enabled", 00:31:10.382 "thread": "nvmf_tgt_poll_group_000" 00:31:10.382 } 00:31:10.382 ]' 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:10.382 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:10.640 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:10.640 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:10.640 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:10.640 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:10.640 13:15:22 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:10.898 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid 2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-secret DHHC-1:03:ZjM0YzhhZGI4OWFkMjg4ZDI4NjFmYTgzODI5YjhlZWJiNjY1Y2RlZmEzYmYxOTZhMjc1NDcwMzUxYzIyYjE3OVgVqbs=: 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:11.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --dhchap-key key3 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.830 13:15:23 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.830 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.830 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:31:11.830 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.088 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.345 2024/07/15 13:15:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:12.345 request: 00:31:12.345 { 00:31:12.345 "method": "bdev_nvme_attach_controller", 00:31:12.345 "params": { 00:31:12.345 "name": "nvme0", 00:31:12.345 "trtype": "tcp", 00:31:12.345 "traddr": "10.0.0.2", 00:31:12.345 "adrfam": "ipv4", 00:31:12.345 "trsvcid": "4420", 00:31:12.345 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:12.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:12.345 "prchk_reftag": false, 00:31:12.345 "prchk_guard": false, 00:31:12.346 "hdgst": false, 00:31:12.346 "ddgst": false, 00:31:12.346 "dhchap_key": 
"key3" 00:31:12.346 } 00:31:12.346 } 00:31:12.346 Got JSON-RPC error response 00:31:12.346 GoRPCClient: error on JSON-RPC call 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:31:12.346 13:15:24 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.912 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:13.171 2024/07/15 13:15:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: 
Code=-5 Msg=Input/output error 00:31:13.171 request: 00:31:13.171 { 00:31:13.171 "method": "bdev_nvme_attach_controller", 00:31:13.171 "params": { 00:31:13.171 "name": "nvme0", 00:31:13.171 "trtype": "tcp", 00:31:13.171 "traddr": "10.0.0.2", 00:31:13.171 "adrfam": "ipv4", 00:31:13.171 "trsvcid": "4420", 00:31:13.171 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:13.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:13.171 "prchk_reftag": false, 00:31:13.171 "prchk_guard": false, 00:31:13.171 "hdgst": false, 00:31:13.171 "ddgst": false, 00:31:13.171 "dhchap_key": "key3" 00:31:13.171 } 00:31:13.171 } 00:31:13.171 Got JSON-RPC error response 00:31:13.171 GoRPCClient: error on JSON-RPC call 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:13.171 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:13.428 13:15:25 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:13.991 2024/07/15 13:15:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:13.991 request: 00:31:13.991 { 00:31:13.991 "method": "bdev_nvme_attach_controller", 00:31:13.991 "params": { 00:31:13.991 "name": "nvme0", 00:31:13.991 "trtype": "tcp", 00:31:13.991 "traddr": "10.0.0.2", 00:31:13.991 "adrfam": "ipv4", 00:31:13.991 "trsvcid": "4420", 00:31:13.991 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:13.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a", 00:31:13.991 "prchk_reftag": false, 00:31:13.991 "prchk_guard": false, 00:31:13.991 "hdgst": false, 00:31:13.991 "ddgst": false, 00:31:13.991 "dhchap_key": "key0", 00:31:13.991 "dhchap_ctrlr_key": "key1" 00:31:13.991 } 00:31:13.991 } 00:31:13.991 Got JSON-RPC error response 00:31:13.991 GoRPCClient: error on JSON-RPC call 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:31:13.991 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:31:14.248 00:31:14.248 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:31:14.248 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:31:14.248 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:14.534 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.534 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:14.534 13:15:26 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 109827 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 109827 ']' 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 109827 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109827 00:31:14.809 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:15.066 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:15.066 killing process with pid 109827 00:31:15.066 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109827' 00:31:15.066 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 109827 00:31:15.066 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 109827 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.324 rmmod nvme_tcp 00:31:15.324 rmmod nvme_fabrics 00:31:15.324 rmmod nvme_keyring 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@493 -- # '[' -n 114680 ']' 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@494 -- # killprocess 114680 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 114680 ']' 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 114680 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114680 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.324 killing process with pid 114680 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114680' 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 114680 00:31:15.324 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 114680 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.582 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.11z /tmp/spdk.key-sha256.rVP /tmp/spdk.key-sha384.GHA /tmp/spdk.key-sha512.ynf /tmp/spdk.key-sha512.oEm /tmp/spdk.key-sha384.svW /tmp/spdk.key-sha256.UT1 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:31:15.583 00:31:15.583 real 3m9.927s 00:31:15.583 user 6m29.394s 00:31:15.583 sys 0m41.859s 
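Condensed for readers skimming the trace, each successful connect_authenticate pass in this test reduces to the target/host RPC round trip below. It is a sketch assembled only from commands already shown above (using the final sha512/ffdhe8192/key3 combination); the target-side calls go to rpc.py's default socket and the host-side calls to the app listening on /var/tmp/host.sock throughout the log, so paths and key names follow the test's own conventions.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a

    # Target side: allow the host and bind DH-HMAC-CHAP key3 to it.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

    # Host side: restrict digests/dhgroups, then attach with the matching key.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

    # Verify on the target that the admin queue pair negotiated what was requested.
    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha512 / ffdhe8192 / completed

    # Tear down before the next digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"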
00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:15.583 ************************************ 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.583 END TEST nvmf_auth_target 00:31:15.583 ************************************ 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@63 -- # '[' tcp = tcp ']' 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@64 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.583 ************************************ 00:31:15.583 START TEST nvmf_bdevio_no_huge 00:31:15.583 ************************************ 00:31:15.583 13:15:27 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:15.583 * Looking for test storage... 00:31:15.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.583 13:15:28 
nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # prepare_net_devs 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # local -g is_hw=no 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # remove_spdk_ns 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.583 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # nvmf_veth_init 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:31:15.584 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:31:15.841 Cannot find device "nvmf_tgt_br" 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:31:15.841 Cannot find device "nvmf_tgt_br2" 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # true 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:31:15.841 Cannot find device "nvmf_tgt_br" 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:31:15.841 Cannot find device "nvmf_tgt_br2" 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:15.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:15.841 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:15.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- 
nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:31:15.842 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:31:16.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:31:16.100 00:31:16.100 --- 10.0.0.2 ping statistics --- 00:31:16.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.100 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:31:16.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:16.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:31:16.100 00:31:16.100 --- 10.0.0.3 ping statistics --- 00:31:16.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.100 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:16.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:31:16.100 00:31:16.100 --- 10.0.0.1 ping statistics --- 00:31:16.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.100 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@437 -- # return 0 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@485 -- # nvmfpid=115097 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@486 -- # waitforlisten 115097 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 --interrupt-mode -m 0x78 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 115097 ']' 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:16.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
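The bring-up traced above is nvmf_veth_init doing its work for NET_TYPE=virt: one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the root-namespace ends bridged on nvmf_br, and iptables opened for NVMe/TCP on port 4420 before nvmf_tgt is launched inside that namespace. A condensed sketch of the same topology, using only the names, addresses, and flags that appear in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is built the same way and omitted here, as is the cleanup of stale devices):

# isolated network namespace for the SPDK target
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if end carries traffic, the *_br end is enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 for the initiator, 10.0.0.2 for the target listener
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the root-namespace ends together and allow NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# launch the target inside the namespace: 1024 MB of non-hugepage memory (--no-huge -s 1024),
# interrupt mode, core mask 0x78, tracepoint group mask 0xFFFF
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 --interrupt-mode -m 0x78 &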
00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:16.100 13:15:28 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:16.100 [2024-07-15 13:15:28.481309] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.100 [2024-07-15 13:15:28.482958] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:31:16.100 [2024-07-15 13:15:28.483056] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:31:16.359 [2024-07-15 13:15:28.640826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.359 [2024-07-15 13:15:28.825886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.359 [2024-07-15 13:15:28.825971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.359 [2024-07-15 13:15:28.825991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.359 [2024-07-15 13:15:28.826006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.359 [2024-07-15 13:15:28.826034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.359 [2024-07-15 13:15:28.826159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:31:16.359 [2024-07-15 13:15:28.827379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:31:16.359 [2024-07-15 13:15:28.827535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:31:16.617 [2024-07-15 13:15:28.828207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.617 [2024-07-15 13:15:28.922659] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.617 [2024-07-15 13:15:28.922824] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.617 [2024-07-15 13:15:28.923206] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.617 [2024-07-15 13:15:28.923436] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:16.617 [2024-07-15 13:15:28.923855] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
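The placement above follows directly from the launch flags: core mask 0x78 selects cores 3 through 6, which is where the four reactors report in, and because the app runs with --interrupt-mode the app_thread and every nvmf_tgt poll-group thread are set to interrupt mode, as the thread.c notices show. The harness's waitforlisten then blocks until the RPC socket answers before the test issues any rpc_cmd. A rough way to reproduce that wait by hand, purely as a sketch and not a claim about how waitforlisten is implemented, is to poll the default socket with rpc.py (wait_for_rpc below is a hypothetical helper, not part of the test suite):

# poll /var/tmp/spdk.sock until the target responds to a trivial RPC
wait_for_rpc() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local i
    for i in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    echo "nvmf_tgt did not start listening on /var/tmp/spdk.sock" >&2
    return 1
}

wait_for_rpc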
00:31:17.182 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.182 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:31:17.182 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:31:17.182 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:17.182 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 [2024-07-15 13:15:29.669204] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 Malloc0 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.440 [2024-07-15 13:15:29.725453] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
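The rpc_cmd calls above build the target side that bdevio is about to exercise: a TCP transport (the extra -o and -u 8192 come straight from NVMF_TRANSPORT_OPTS and bdevio.sh), a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host (-a) with serial SPDK00000000000001, the namespace, and a listener on 10.0.0.2:4420. Issued by hand against the target's default RPC socket, the equivalent rpc.py calls would look roughly like this (a sketch that only restates the RPC names and arguments visible in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# transport, with the same options the test passes through rpc_cmd
$rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
# 64 MiB backing bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$rpc -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
# subsystem: allow any host, fixed serial number, then attach the namespace
$rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listener on the namespace-side veth address
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the host side, gen_nvmf_target_json then renders the matching bdev_nvme_attach_controller configuration (the JSON printed just below), which bdevio consumes via --json /dev/fd/62 to connect to 10.0.0.2:4420 as nqn.2016-06.io.spdk:host1.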
00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # config=() 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # local subsystem config 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:17.440 { 00:31:17.440 "params": { 00:31:17.440 "name": "Nvme$subsystem", 00:31:17.440 "trtype": "$TEST_TRANSPORT", 00:31:17.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.440 "adrfam": "ipv4", 00:31:17.440 "trsvcid": "$NVMF_PORT", 00:31:17.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.440 "hdgst": ${hdgst:-false}, 00:31:17.440 "ddgst": ${ddgst:-false} 00:31:17.440 }, 00:31:17.440 "method": "bdev_nvme_attach_controller" 00:31:17.440 } 00:31:17.440 EOF 00:31:17.440 )") 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # cat 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # jq . 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@561 -- # IFS=, 00:31:17.440 13:15:29 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:31:17.440 "params": { 00:31:17.440 "name": "Nvme1", 00:31:17.440 "trtype": "tcp", 00:31:17.440 "traddr": "10.0.0.2", 00:31:17.440 "adrfam": "ipv4", 00:31:17.440 "trsvcid": "4420", 00:31:17.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.440 "hdgst": false, 00:31:17.440 "ddgst": false 00:31:17.440 }, 00:31:17.440 "method": "bdev_nvme_attach_controller" 00:31:17.440 }' 00:31:17.440 [2024-07-15 13:15:29.797384] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:31:17.440 [2024-07-15 13:15:29.797525] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid115166 ] 00:31:17.698 [2024-07-15 13:15:29.950159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:17.698 [2024-07-15 13:15:30.134418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.698 [2024-07-15 13:15:30.134528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.698 [2024-07-15 13:15:30.135009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.956 I/O targets: 00:31:17.956 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:17.956 00:31:17.956 00:31:17.956 CUnit - A unit testing framework for C - Version 2.1-3 00:31:17.956 http://cunit.sourceforge.net/ 00:31:17.956 00:31:17.956 00:31:17.956 Suite: bdevio tests on: Nvme1n1 00:31:17.956 Test: blockdev write read block ...passed 00:31:17.956 Test: blockdev write zeroes read block ...passed 00:31:17.956 Test: blockdev write zeroes read no split ...passed 00:31:17.956 Test: blockdev write zeroes read split ...passed 00:31:18.215 Test: blockdev write zeroes read split partial ...passed 00:31:18.215 Test: blockdev reset ...[2024-07-15 13:15:30.426632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.215 [2024-07-15 13:15:30.427062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2408460 (9): Bad file descriptor 00:31:18.215 [2024-07-15 13:15:30.431518] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:18.215 passed 00:31:18.215 Test: blockdev write read 8 blocks ...passed 00:31:18.215 Test: blockdev write read size > 128k ...passed 00:31:18.215 Test: blockdev write read invalid size ...passed 00:31:18.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:18.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:18.215 Test: blockdev write read max offset ...passed 00:31:18.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:18.215 Test: blockdev writev readv 8 blocks ...passed 00:31:18.215 Test: blockdev writev readv 30 x 1block ...passed 00:31:18.215 Test: blockdev writev readv block ...passed 00:31:18.215 Test: blockdev writev readv size > 128k ...passed 00:31:18.215 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:18.215 Test: blockdev comparev and writev ...[2024-07-15 13:15:30.606960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.607087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.607108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.607690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.607723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.607743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.608323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.608365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.608397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.608417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.609161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.609213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:18.215 [2024-07-15 13:15:30.609248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:18.215 [2024-07-15 13:15:30.609267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:18.215 passed 00:31:18.472 Test: blockdev nvme passthru rw ...passed 00:31:18.472 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:15:30.692388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:18.472 [2024-07-15 13:15:30.692475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:18.473 [2024-07-15 13:15:30.692692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:18.473 [2024-07-15 13:15:30.692723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:18.473 [2024-07-15 13:15:30.692945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:18.473 [2024-07-15 13:15:30.692983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:18.473 [2024-07-15 13:15:30.693177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:18.473 [2024-07-15 13:15:30.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:18.473 passed 00:31:18.473 Test: blockdev nvme admin passthru ...passed 00:31:18.473 Test: blockdev copy ...passed 00:31:18.473 00:31:18.473 Run Summary: Type Total Ran Passed Failed Inactive 00:31:18.473 suites 1 1 n/a 0 0 00:31:18.473 tests 23 23 23 0 0 00:31:18.473 asserts 152 152 152 0 n/a 00:31:18.473 00:31:18.473 Elapsed time = 0.891 seconds 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # nvmfcleanup 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.731 rmmod nvme_tcp 00:31:18.731 rmmod nvme_fabrics 00:31:18.731 rmmod nvme_keyring 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:31:18.731 13:15:31 
nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # '[' -n 115097 ']' 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # killprocess 115097 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 115097 ']' 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 115097 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.731 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115097 00:31:18.989 killing process with pid 115097 00:31:18.989 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:31:18.989 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:31:18.989 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115097' 00:31:18.989 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 115097 00:31:18.989 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 115097 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@282 -- # remove_spdk_ns 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:31:19.247 00:31:19.247 real 0m3.718s 00:31:19.247 user 0m8.213s 00:31:19.247 sys 0m2.001s 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:19.247 ************************************ 00:31:19.247 END TEST nvmf_bdevio_no_huge 00:31:19.247 ************************************ 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@65 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.247 ************************************ 00:31:19.247 START TEST nvmf_tls 00:31:19.247 ************************************ 00:31:19.247 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:19.506 * Looking for test storage... 00:31:19.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:31:19.506 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@452 -- # prepare_net_devs 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@414 -- # local -g is_hw=no 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@416 -- # remove_spdk_ns 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@436 -- # nvmf_veth_init 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:31:19.507 Cannot find device "nvmf_tgt_br" 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@159 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@160 
-- # ip link set nvmf_tgt_br2 nomaster 00:31:19.507 Cannot find device "nvmf_tgt_br2" 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@160 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:31:19.507 Cannot find device "nvmf_tgt_br" 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@162 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:31:19.507 Cannot find device "nvmf_tgt_br2" 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@163 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:19.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@166 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:19.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@167 -- # true 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:19.507 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:19.768 13:15:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
set nvmf_tgt_if2 up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:31:19.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:31:19.768 00:31:19.768 --- 10.0.0.2 ping statistics --- 00:31:19.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.768 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:31:19.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:19.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:31:19.768 00:31:19.768 --- 10.0.0.3 ping statistics --- 00:31:19.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.768 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:19.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:31:19.768 00:31:19.768 --- 10.0.0.1 ping statistics --- 00:31:19.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.768 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@437 -- # return 0 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=115351 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 --wait-for-rpc 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 115351 00:31:19.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115351 ']' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.768 13:15:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.033 [2024-07-15 13:15:32.256834] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.033 [2024-07-15 13:15:32.258800] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
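Condensed, the nvmf/common.sh fixture built above (network namespace, three veth pairs, a bridge, addresses, firewall openings) plus the interrupt-mode target launch amounts to the following. This is an abridged sketch assuming root privileges; it keeps the interface, namespace, and binary names from the trace and skips the best-effort teardown of leftovers that the script attempts first.

NS=nvmf_tgt_ns_spdk
SPDK=/home/vagrant/spdk_repo/spdk

# namespace and veth pairs (one end of each pair stays on the host as a bridge port)
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# addressing: 10.0.0.1 stays on the host, 10.0.0.2/.3 live inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# bridge the host-side ends together and open TCP/4420 for the NVMe/TCP listener
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

# the target runs inside the namespace, pinned to core 1 (-m 0x2), in interrupt
# mode, with --wait-for-rpc so socket options can be configured before init
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 --wait-for-rpc &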
00:31:20.033 [2024-07-15 13:15:32.259051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.033 [2024-07-15 13:15:32.403534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.033 [2024-07-15 13:15:32.482776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.033 [2024-07-15 13:15:32.482853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.033 [2024-07-15 13:15:32.482866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.033 [2024-07-15 13:15:32.482875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.033 [2024-07-15 13:15:32.482882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.033 [2024-07-15 13:15:32.482919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.033 [2024-07-15 13:15:32.483269] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:31:20.964 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:31:21.221 true 00:31:21.478 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:21.478 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:31:21.736 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@73 -- # version=0 00:31:21.736 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:31:21.736 13:15:33 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:21.994 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:21.994 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:31:22.253 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@81 -- # version=13 00:31:22.253 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:31:22.253 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:31:22.512 13:15:34 
nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:31:22.512 13:15:34 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.770 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@89 -- # version=7 00:31:22.770 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:31:22.770 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:22.770 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:31:23.027 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:31:23.027 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:31:23.027 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:31:23.285 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:23.285 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:31:23.542 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:31:23.542 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:31:23.542 13:15:35 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:31:23.800 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:23.800 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:31:24.057 13:15:36 
nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # key=ffeeddccbbaa99887766554433221100 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:31:24.057 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.OTqFNLemYn 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Pi8QA3XAok 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.OTqFNLemYn 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Pi8QA3XAok 00:31:24.316 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:24.573 13:15:36 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:31:24.830 [2024-07-15 13:15:37.132302] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
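The two keys minted above come from format_interchange_psk: the raw hex-string key gets a 4-byte CRC-32 appended, the result is base64-encoded, and the whole thing is framed as NVMeTLSkey-1:<digest>:<base64>: before being written to a 0600 temp file. A rough stand-in for that python one-liner is sketched below; it is illustrative only, and the little-endian placement of the CRC is an assumption, not something stated in the trace.

key=00112233445566778899aabbccddeeff   # first test key from the trace; digest field 01
python3 - "$key" <<'EOF'
import base64, sys, zlib
psk = sys.argv[1].encode()
# assumption: CRC-32 of the key bytes, 4 bytes little-endian, appended before base64
crc = zlib.crc32(psk).to_bytes(4, "little")
print("NVMeTLSkey-1:01:" + base64.b64encode(psk + crc).decode() + ":")
EOF

The digest argument seen in the trace (1 here, 2 for the longer key_long generated near the end of this section) selects the :01: / :02: field of the framing; the finished string is stored via mktemp, restricted with chmod 0600, and later handed to --psk / --psk-path.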
00:31:24.830 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.OTqFNLemYn 00:31:24.830 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.OTqFNLemYn 00:31:24.831 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:25.088 [2024-07-15 13:15:37.379742] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.088 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:25.345 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:25.603 [2024-07-15 13:15:37.863667] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:25.603 [2024-07-15 13:15:37.864072] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.603 13:15:37 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:25.860 malloc0 00:31:25.860 13:15:38 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:26.118 13:15:38 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OTqFNLemYn 00:31:26.388 [2024-07-15 13:15:38.607403] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:26.388 13:15:38 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OTqFNLemYn 00:31:36.407 Initializing NVMe Controllers 00:31:36.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:36.407 Initialization complete. Launching workers. 
00:31:36.407 ======================================================== 00:31:36.407 Latency(us) 00:31:36.408 Device Information : IOPS MiB/s Average min max 00:31:36.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8906.36 34.79 7187.79 1116.99 11087.17 00:31:36.408 ======================================================== 00:31:36.408 Total : 8906.36 34.79 7187.79 1116.99 11087.17 00:31:36.408 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OTqFNLemYn 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OTqFNLemYn' 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=115695 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 115695 /var/tmp/bdevperf.sock 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115695 ']' 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:36.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.408 13:15:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:36.408 [2024-07-15 13:15:48.872964] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
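For reference, the target-side configuration that target/tls.sh drove above reduces to the RPC sequence below (a sketch with the xtrace noise stripped; the key path is the temp file minted earlier, and the calls go to the target's default /var/tmp/spdk.sock unix socket, reachable without entering the network namespace, exactly as the trace does).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/tmp.OTqFNLemYn   # 0600 file holding the NVMeTLSkey-1:01:... string

# the target was started with --wait-for-rpc, so the ssl socket implementation
# can be selected and tuned before the framework initializes
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# transport, subsystem, TLS listener (-k), namespace, and the host/PSK binding
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"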
00:31:36.408 [2024-07-15 13:15:48.873094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115695 ] 00:31:36.665 [2024-07-15 13:15:49.018149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.665 [2024-07-15 13:15:49.089476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.594 13:15:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.594 13:15:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:37.594 13:15:49 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OTqFNLemYn 00:31:37.851 [2024-07-15 13:15:50.174853] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:37.851 [2024-07-15 13:15:50.175046] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:37.851 TLSTESTn1 00:31:37.851 13:15:50 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:38.108 Running I/O for 10 seconds... 00:31:48.078 00:31:48.078 Latency(us) 00:31:48.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.078 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:48.078 Verification LBA range: start 0x0 length 0x2000 00:31:48.078 TLSTESTn1 : 10.02 3488.25 13.63 0.00 0.00 36615.27 1407.53 26810.18 00:31:48.078 =================================================================================================================== 00:31:48.078 Total : 3488.25 13.63 0.00 0.00 36615.27 1407.53 26810.18 00:31:48.078 0 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@45 -- # killprocess 115695 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115695 ']' 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115695 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115695 00:31:48.078 killing process with pid 115695 00:31:48.078 Received shutdown signal, test time was about 10.000000 seconds 00:31:48.078 00:31:48.078 Latency(us) 00:31:48.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.078 =================================================================================================================== 00:31:48.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:48.078 13:16:00 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115695' 00:31:48.078 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115695 00:31:48.078 [2024-07-15 13:16:00.465103] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115695 00:31:48.078 scheduled for removal in v24.09 hit 1 times 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pi8QA3XAok 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pi8QA3XAok 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:31:48.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pi8QA3XAok 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Pi8QA3XAok' 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=115842 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 115842 /var/tmp/bdevperf.sock 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115842 ']' 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
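The run_bdevperf helper used for this and the failing cases that follow always drives the initiator the same way: start bdevperf in wait mode on its own RPC socket, attach a TLS-protected NVMe-oF controller, then kick off the verify workload. Sketched with the paths and NQNs from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bdevperf.sock

# -z makes bdevperf wait for RPC-supplied bdevs; the script waits for $sock to
# appear (waitforlisten) before issuing any RPCs
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.OTqFNLemYn
$bperf_py -t 20 -s "$sock" perform_tests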
00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:48.336 13:16:00 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:48.336 [2024-07-15 13:16:00.711795] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:31:48.336 [2024-07-15 13:16:00.712202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115842 ] 00:31:48.594 [2024-07-15 13:16:00.849710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.594 [2024-07-15 13:16:00.909288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.526 13:16:01 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.526 13:16:01 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:49.526 13:16:01 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pi8QA3XAok 00:31:49.785 [2024-07-15 13:16:02.038591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:49.785 [2024-07-15 13:16:02.038710] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:49.785 [2024-07-15 13:16:02.043713] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:49.785 [2024-07-15 13:16:02.044267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325ca0 (107): Transport endpoint is not connected 00:31:49.785 [2024-07-15 13:16:02.045252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1325ca0 (9): Bad file descriptor 00:31:49.785 [2024-07-15 13:16:02.046248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:49.785 [2024-07-15 13:16:02.046271] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:49.785 [2024-07-15 13:16:02.046286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:49.785 2024/07/15 13:16:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Pi8QA3XAok subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:49.785 request: 00:31:49.785 { 00:31:49.785 "method": "bdev_nvme_attach_controller", 00:31:49.785 "params": { 00:31:49.785 "name": "TLSTEST", 00:31:49.785 "trtype": "tcp", 00:31:49.785 "traddr": "10.0.0.2", 00:31:49.785 "adrfam": "ipv4", 00:31:49.785 "trsvcid": "4420", 00:31:49.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:49.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:49.785 "prchk_reftag": false, 00:31:49.785 "prchk_guard": false, 00:31:49.785 "hdgst": false, 00:31:49.785 "ddgst": false, 00:31:49.785 "psk": "/tmp/tmp.Pi8QA3XAok" 00:31:49.785 } 00:31:49.785 } 00:31:49.785 Got JSON-RPC error response 00:31:49.785 GoRPCClient: error on JSON-RPC call 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@36 -- # killprocess 115842 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115842 ']' 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115842 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115842 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:49.785 killing process with pid 115842 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115842' 00:31:49.785 Received shutdown signal, test time was about 10.000000 seconds 00:31:49.785 00:31:49.785 Latency(us) 00:31:49.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.785 =================================================================================================================== 00:31:49.785 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:49.785 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115842 00:31:49.785 [2024-07-15 13:16:02.086264] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115842 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:49.786 13:16:02 
nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OTqFNLemYn 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OTqFNLemYn 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OTqFNLemYn 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OTqFNLemYn' 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=115888 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 115888 /var/tmp/bdevperf.sock 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115888 ']' 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:49.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:49.786 13:16:02 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:50.043 [2024-07-15 13:16:02.291063] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:31:50.043 [2024-07-15 13:16:02.291160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115888 ] 00:31:50.043 [2024-07-15 13:16:02.421600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.043 [2024-07-15 13:16:02.481455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.975 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:50.975 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:50.975 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.OTqFNLemYn 00:31:51.232 [2024-07-15 13:16:03.472706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:51.232 [2024-07-15 13:16:03.472835] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:51.232 [2024-07-15 13:16:03.477610] tcp.c: 940:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:51.232 [2024-07-15 13:16:03.477650] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:51.232 [2024-07-15 13:16:03.477708] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:51.232 [2024-07-15 13:16:03.478328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02ca0 (107): Transport endpoint is not connected 00:31:51.232 [2024-07-15 13:16:03.479314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02ca0 (9): Bad file descriptor 00:31:51.232 [2024-07-15 13:16:03.480310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:51.232 [2024-07-15 13:16:03.480335] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:51.232 [2024-07-15 13:16:03.480350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:51.232 2024/07/15 13:16:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.OTqFNLemYn subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:51.232 request: 00:31:51.232 { 00:31:51.232 "method": "bdev_nvme_attach_controller", 00:31:51.232 "params": { 00:31:51.232 "name": "TLSTEST", 00:31:51.232 "trtype": "tcp", 00:31:51.232 "traddr": "10.0.0.2", 00:31:51.232 "adrfam": "ipv4", 00:31:51.232 "trsvcid": "4420", 00:31:51.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:51.232 "prchk_reftag": false, 00:31:51.232 "prchk_guard": false, 00:31:51.232 "hdgst": false, 00:31:51.232 "ddgst": false, 00:31:51.232 "psk": "/tmp/tmp.OTqFNLemYn" 00:31:51.232 } 00:31:51.232 } 00:31:51.232 Got JSON-RPC error response 00:31:51.232 GoRPCClient: error on JSON-RPC call 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@36 -- # killprocess 115888 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115888 ']' 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115888 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115888 00:31:51.232 killing process with pid 115888 00:31:51.232 Received shutdown signal, test time was about 10.000000 seconds 00:31:51.232 00:31:51.232 Latency(us) 00:31:51.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.232 =================================================================================================================== 00:31:51.232 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115888' 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115888 00:31:51.232 [2024-07-15 13:16:03.522551] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115888 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:51.232 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:51.232 13:16:03 
nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OTqFNLemYn 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OTqFNLemYn 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:31:51.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OTqFNLemYn 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.OTqFNLemYn' 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=115928 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 115928 /var/tmp/bdevperf.sock 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115928 ']' 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:51.233 13:16:03 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:51.490 [2024-07-15 13:16:03.748640] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:31:51.490 [2024-07-15 13:16:03.748828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115928 ] 00:31:51.490 [2024-07-15 13:16:03.899014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.748 [2024-07-15 13:16:03.965640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.315 13:16:04 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:52.315 13:16:04 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:52.315 13:16:04 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OTqFNLemYn 00:31:52.573 [2024-07-15 13:16:04.997053] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:52.573 [2024-07-15 13:16:04.997166] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:52.573 [2024-07-15 13:16:05.001993] tcp.c: 940:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:52.573 [2024-07-15 13:16:05.002049] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:52.573 [2024-07-15 13:16:05.002108] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:52.573 [2024-07-15 13:16:05.002685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7fca0 (107): Transport endpoint is not connected 00:31:52.573 [2024-07-15 13:16:05.003673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7fca0 (9): Bad file descriptor 00:31:52.573 [2024-07-15 13:16:05.004670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:52.573 [2024-07-15 13:16:05.004716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:52.573 [2024-07-15 13:16:05.004743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:31:52.573 2024/07/15 13:16:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.OTqFNLemYn subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:52.573 request: 00:31:52.573 { 00:31:52.573 "method": "bdev_nvme_attach_controller", 00:31:52.573 "params": { 00:31:52.573 "name": "TLSTEST", 00:31:52.573 "trtype": "tcp", 00:31:52.573 "traddr": "10.0.0.2", 00:31:52.573 "adrfam": "ipv4", 00:31:52.573 "trsvcid": "4420", 00:31:52.573 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:52.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:52.573 "prchk_reftag": false, 00:31:52.573 "prchk_guard": false, 00:31:52.573 "hdgst": false, 00:31:52.573 "ddgst": false, 00:31:52.573 "psk": "/tmp/tmp.OTqFNLemYn" 00:31:52.573 } 00:31:52.573 } 00:31:52.573 Got JSON-RPC error response 00:31:52.573 GoRPCClient: error on JSON-RPC call 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@36 -- # killprocess 115928 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115928 ']' 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115928 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.573 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115928 00:31:52.831 killing process with pid 115928 00:31:52.831 Received shutdown signal, test time was about 10.000000 seconds 00:31:52.831 00:31:52.831 Latency(us) 00:31:52.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.831 =================================================================================================================== 00:31:52.831 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115928' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115928 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115928 00:31:52.831 [2024-07-15 13:16:05.051725] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.831 13:16:05 
nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk= 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=115975 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 115975 /var/tmp/bdevperf.sock 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 115975 ']' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:52.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:52.831 13:16:05 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:52.831 [2024-07-15 13:16:05.292625] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:31:52.831 [2024-07-15 13:16:05.292733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115975 ] 00:31:53.089 [2024-07-15 13:16:05.431921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.089 [2024-07-15 13:16:05.518693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.023 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:54.023 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:54.023 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:54.281 [2024-07-15 13:16:06.672430] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:54.281 [2024-07-15 13:16:06.674362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1909240 (9): Bad file descriptor 00:31:54.281 [2024-07-15 13:16:06.675358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:54.281 [2024-07-15 13:16:06.675400] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:54.281 [2024-07-15 13:16:06.675425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:54.282 2024/07/15 13:16:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:54.282 request: 00:31:54.282 { 00:31:54.282 "method": "bdev_nvme_attach_controller", 00:31:54.282 "params": { 00:31:54.282 "name": "TLSTEST", 00:31:54.282 "trtype": "tcp", 00:31:54.282 "traddr": "10.0.0.2", 00:31:54.282 "adrfam": "ipv4", 00:31:54.282 "trsvcid": "4420", 00:31:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.282 "prchk_reftag": false, 00:31:54.282 "prchk_guard": false, 00:31:54.282 "hdgst": false, 00:31:54.282 "ddgst": false 00:31:54.282 } 00:31:54.282 } 00:31:54.282 Got JSON-RPC error response 00:31:54.282 GoRPCClient: error on JSON-RPC call 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@36 -- # killprocess 115975 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115975 ']' 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 115975 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115975 00:31:54.282 killing process with pid 115975 00:31:54.282 Received shutdown signal, test time was about 10.000000 seconds 00:31:54.282 00:31:54.282 Latency(us) 00:31:54.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.282 =================================================================================================================== 00:31:54.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115975' 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115975 00:31:54.282 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115975 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@158 -- # killprocess 115351 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 115351 ']' 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 115351 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115351 00:31:54.540 killing process with pid 115351 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115351' 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 115351 00:31:54.540 [2024-07-15 13:16:06.917467] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:54.540 13:16:06 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 115351 00:31:54.798 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:31:54.798 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:31:54.798 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:31:54.798 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@708 -- # digest=2 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JuGrLOIA2m 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JuGrLOIA2m 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116026 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116026 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:54.799 13:16:07 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116026 ']' 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.799 13:16:07 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:54.799 [2024-07-15 13:16:07.210899] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.799 [2024-07-15 13:16:07.212021] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:31:54.799 [2024-07-15 13:16:07.212584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.057 [2024-07-15 13:16:07.348432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.057 [2024-07-15 13:16:07.407976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.057 [2024-07-15 13:16:07.408028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.057 [2024-07-15 13:16:07.408040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.057 [2024-07-15 13:16:07.408048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.057 [2024-07-15 13:16:07.408055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.057 [2024-07-15 13:16:07.408087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.057 [2024-07-15 13:16:07.454973] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.057 [2024-07-15 13:16:07.455294] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
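The key used for the rest of the trace is built by format_interchange_psk at target/tls.sh@159 from a 48-character hex string and digest id 2, written to a mktemp file and locked down to 0600. A sketch of what the embedded python step appears to compute -- the layout (prefix, two-hex-digit hash id, base64 of the configured PSK plus a 4-byte CRC32, trailing colon) matches the key_long value in the trace, while the little-endian packing of the CRC is our assumption:

import base64
import zlib

def format_interchange_psk(configured_psk: str, hmac_id: int) -> str:
    """Rebuild the key_long string from target/tls.sh@159 (sketch).

    The configured PSK is taken as ASCII bytes and a 4-byte CRC32 of those
    bytes is appended before base64 encoding; little-endian CRC packing is
    assumed, only the final string is confirmed by the trace.
    """
    key = configured_psk.encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, base64.b64encode(key + crc).decode())

key_long = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
# Trace value: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
print(key_long)

The 0600 mode on the key file matters: the same file is deliberately re-chmodded to 0666 further down to provoke the permission failures.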
00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JuGrLOIA2m 00:31:55.988 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:55.988 [2024-07-15 13:16:08.456662] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.246 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:56.505 13:16:08 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:56.767 [2024-07-15 13:16:09.060675] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:56.767 [2024-07-15 13:16:09.061014] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.767 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:57.025 malloc0 00:31:57.025 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:57.284 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:31:57.541 [2024-07-15 13:16:09.948574] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:57.541 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JuGrLOIA2m 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JuGrLOIA2m' 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=116129 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM 
EXIT 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 116129 /var/tmp/bdevperf.sock 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116129 ']' 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:57.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:57.542 13:16:09 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:57.800 [2024-07-15 13:16:10.028641] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:31:57.800 [2024-07-15 13:16:10.028750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116129 ] 00:31:57.800 [2024-07-15 13:16:10.169286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.800 [2024-07-15 13:16:10.239745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:58.733 13:16:11 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:58.733 13:16:11 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:31:58.733 13:16:11 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:31:58.991 [2024-07-15 13:16:11.295011] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:58.991 [2024-07-15 13:16:11.295120] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:58.991 TLSTESTn1 00:31:58.991 13:16:11 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:59.249 Running I/O for 10 seconds... 
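This is the positive path: setup_nvmf_tgt builds the TLS-enabled target with a handful of RPCs and run_bdevperf attaches with --psk, so TLSTESTn1 comes up and the 10-second verify run starts. The same target-side sequence, scripted directly; every flag mirrors a line of the trace above, and rpc.py talks to the target's default /var/tmp/spdk.sock:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
KEY = "/tmp/tmp.JuGrLOIA2m"                 # the 0600 PSK file from the trace
NQN = "nqn.2016-06.io.spdk:cnode1"
HOST = "nqn.2016-06.io.spdk:host1"

def rpc(*args):
    # One call per line of the setup_nvmf_tgt trace.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", NQN, HOST, "--psk", KEY)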
00:32:09.218 00:32:09.218 Latency(us) 00:32:09.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.218 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:09.218 Verification LBA range: start 0x0 length 0x2000 00:32:09.218 TLSTESTn1 : 10.02 3502.58 13.68 0.00 0.00 36471.93 7626.01 35985.22 00:32:09.218 =================================================================================================================== 00:32:09.218 Total : 3502.58 13.68 0.00 0.00 36471.93 7626.01 35985.22 00:32:09.218 0 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@45 -- # killprocess 116129 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116129 ']' 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116129 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116129 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:09.218 killing process with pid 116129 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116129' 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116129 00:32:09.218 Received shutdown signal, test time was about 10.000000 seconds 00:32:09.218 00:32:09.218 Latency(us) 00:32:09.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.218 =================================================================================================================== 00:32:09.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.218 [2024-07-15 13:16:21.564649] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:09.218 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116129 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JuGrLOIA2m 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JuGrLOIA2m 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JuGrLOIA2m 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JuGrLOIA2m 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JuGrLOIA2m' 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=116272 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 116272 /var/tmp/bdevperf.sock 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116272 ']' 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.476 13:16:21 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:09.476 [2024-07-15 13:16:21.785019] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
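The chmod 0666 at target/tls.sh@170 deliberately loosens the key file, and the NOT run_bdevperf wrapper expects the next attach to fail: SPDK refuses to load a PSK file that other users can read. A rough sketch of that gate follows; the exact mode bits rejected are an assumption, since the trace only establishes that 0600 passes and 0666 is refused with 'Incorrect permissions for PSK file':

import os
import stat
import tempfile

def psk_file_permissions_ok(path: str) -> bool:
    """Approximate the check behind 'Incorrect permissions for PSK file'.

    Assumption: any group/other access bit disqualifies the file; the trace
    only shows that a 0600 key is accepted and a 0666 key is rejected.
    """
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        key = f.name                                   # stand-in for /tmp/tmp.JuGrLOIA2m
    os.chmod(key, 0o666)
    print("0666 ok:", psk_file_permissions_ok(key))    # False -> the attach below is rejected
    os.chmod(key, 0o600)
    print("0600 ok:", psk_file_permissions_ok(key))    # True  -> the earlier run succeeded
    os.unlink(key)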
00:32:09.476 [2024-07-15 13:16:21.785111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116272 ] 00:32:09.476 [2024-07-15 13:16:21.920107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.733 [2024-07-15 13:16:21.980027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.733 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:09.733 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:09.733 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:32:09.992 [2024-07-15 13:16:22.330065] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:09.992 [2024-07-15 13:16:22.330154] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:32:09.992 [2024-07-15 13:16:22.330166] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JuGrLOIA2m 00:32:09.992 2024/07/15 13:16:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.JuGrLOIA2m subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:32:09.992 request: 00:32:09.992 { 00:32:09.992 "method": "bdev_nvme_attach_controller", 00:32:09.992 "params": { 00:32:09.992 "name": "TLSTEST", 00:32:09.992 "trtype": "tcp", 00:32:09.992 "traddr": "10.0.0.2", 00:32:09.992 "adrfam": "ipv4", 00:32:09.992 "trsvcid": "4420", 00:32:09.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.992 "prchk_reftag": false, 00:32:09.992 "prchk_guard": false, 00:32:09.992 "hdgst": false, 00:32:09.992 "ddgst": false, 00:32:09.992 "psk": "/tmp/tmp.JuGrLOIA2m" 00:32:09.992 } 00:32:09.992 } 00:32:09.992 Got JSON-RPC error response 00:32:09.992 GoRPCClient: error on JSON-RPC call 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@36 -- # killprocess 116272 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116272 ']' 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116272 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116272 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:09.992 killing process with pid 116272 00:32:09.992 Received shutdown signal, test time was about 10.000000 seconds 00:32:09.992 00:32:09.992 Latency(us) 00:32:09.992 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.992 =================================================================================================================== 00:32:09.992 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116272' 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116272 00:32:09.992 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116272 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@37 -- # return 1 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@174 -- # killprocess 116026 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116026 ']' 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116026 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116026 00:32:10.250 killing process with pid 116026 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116026' 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116026 00:32:10.250 [2024-07-15 13:16:22.558824] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:10.250 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116026 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:10.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
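Both the bdevperf instance (116272) and the first target (116026) are torn down above through killprocess, whose trace spells out the pattern: confirm the PID with kill -0, read its name with ps --no-headers -o comm= (reactor_1/reactor_2 are SPDK's reactor threads), refuse to signal anything named sudo, then kill and wait. A sketch of the same guardrail; reaping a non-child PID is simplified here:

import os
import signal
import subprocess

def killprocess(pid: int) -> None:
    """Mirror the killprocess trace: verify the PID, inspect its comm name,
    never signal a 'sudo' process, then SIGTERM it and try to reap it."""
    os.kill(pid, 0)                                      # like 'kill -0 $pid'; raises if gone
    comm = subprocess.run(["ps", "--no-headers", "-o", "comm=", str(pid)],
                          capture_output=True, text=True).stdout.strip()
    if comm == "sudo":
        raise RuntimeError("refusing to kill a sudo process")
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
    try:
        os.waitpid(pid, 0)                               # works when pid is our child, as in the test shell
    except ChildProcessError:
        pass                                             # foreign PID: nothing to reap here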
00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116309 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116309 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116309 ']' 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:10.508 13:16:22 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:10.508 [2024-07-15 13:16:22.799208] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:10.508 [2024-07-15 13:16:22.800437] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:10.508 [2024-07-15 13:16:22.800539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.508 [2024-07-15 13:16:22.938531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.766 [2024-07-15 13:16:22.998010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.766 [2024-07-15 13:16:22.998077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.766 [2024-07-15 13:16:22.998089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.766 [2024-07-15 13:16:22.998097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.766 [2024-07-15 13:16:22.998104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.766 [2024-07-15 13:16:22.998130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.766 [2024-07-15 13:16:23.047217] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:10.766 [2024-07-15 13:16:23.047522] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
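The target is relaunched above as PID 116309 with nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 inside the nvmf_tgt_ns_spdk namespace, and nvmfappstart/waitforlisten block until the RPC socket answers. A sketch of that start-and-wait step; polling the Unix socket until it accepts a connection is a simplification of the script's waitforlisten loop:

import socket
import subprocess
import time

NVMF_TGT = "/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt"
SOCK = "/var/tmp/spdk.sock"

def start_nvmf_tgt_interrupt_mode() -> subprocess.Popen:
    """Launch the target the way the trace does and wait for its RPC socket."""
    proc = subprocess.Popen([
        "ip", "netns", "exec", "nvmf_tgt_ns_spdk",
        NVMF_TGT, "-i", "0", "-e", "0xFFFF", "--interrupt-mode", "-m", "0x2",
    ])
    for _ in range(100):                       # loosely mirrors max_retries=100 in the trace
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(SOCK)
            return proc                        # socket accepted -> target is listening
        except OSError:
            time.sleep(1)
    proc.kill()
    raise TimeoutError("nvmf_tgt did not start listening on " + SOCK)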
00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JuGrLOIA2m 00:32:11.392 13:16:23 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:11.651 [2024-07-15 13:16:24.027030] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.651 13:16:24 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:11.909 13:16:24 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:12.167 [2024-07-15 13:16:24.567000] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:12.167 [2024-07-15 13:16:24.567410] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.167 13:16:24 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:12.425 malloc0 00:32:12.425 13:16:24 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:12.682 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:32:12.939 [2024-07-15 13:16:25.382648] tcp.c:3661:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:32:12.939 [2024-07-15 13:16:25.382698] tcp.c:3747:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:32:12.939 [2024-07-15 
13:16:25.382738] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:32:12.939 2024/07/15 13:16:25 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.JuGrLOIA2m], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:32:12.939 request: 00:32:12.939 { 00:32:12.939 "method": "nvmf_subsystem_add_host", 00:32:12.939 "params": { 00:32:12.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.939 "host": "nqn.2016-06.io.spdk:host1", 00:32:12.939 "psk": "/tmp/tmp.JuGrLOIA2m" 00:32:12.939 } 00:32:12.939 } 00:32:12.939 Got JSON-RPC error response 00:32:12.939 GoRPCClient: error on JSON-RPC call 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@180 -- # killprocess 116309 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116309 ']' 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116309 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:12.939 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116309 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116309' 00:32:13.197 killing process with pid 116309 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116309 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116309 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JuGrLOIA2m 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116421 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116421 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116421 ']' 00:32:13.197 13:16:25 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:13.197 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:13.197 [2024-07-15 13:16:25.656939] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:13.197 [2024-07-15 13:16:25.658049] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:13.197 [2024-07-15 13:16:25.658146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.456 [2024-07-15 13:16:25.793959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.456 [2024-07-15 13:16:25.853850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.456 [2024-07-15 13:16:25.853901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.456 [2024-07-15 13:16:25.853913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.456 [2024-07-15 13:16:25.853922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.456 [2024-07-15 13:16:25.853929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.456 [2024-07-15 13:16:25.853956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.456 [2024-07-15 13:16:25.903453] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.456 [2024-07-15 13:16:25.903855] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
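With the key back at 0600 and a fresh target (116421) up, the test moves on to the config round-trip. The JSON request/response dumps throughout this trace (the bdev_nvme_attach_controller and nvmf_subsystem_add_host failures above) are plain JSON-RPC 2.0 exchanges over the application's Unix socket; rpc.py and the Go client named in the errors are wrappers around that. A minimal raw client sketch -- reading until the buffer parses as JSON is a simplification of the real client's framing:

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict) -> dict:
    """Send one JSON-RPC 2.0 request to an SPDK application socket (sketch)."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue                       # keep reading until the reply is complete

# The call that failed earlier with Code=-32603 while the key file was still 0666:
# spdk_rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_host", {
#     "nqn": "nqn.2016-06.io.spdk:cnode1",
#     "host": "nqn.2016-06.io.spdk:host1",
#     "psk": "/tmp/tmp.JuGrLOIA2m",
# })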
00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JuGrLOIA2m 00:32:13.714 13:16:25 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:13.971 [2024-07-15 13:16:26.218743] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.971 13:16:26 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:14.228 13:16:26 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:14.484 [2024-07-15 13:16:26.710664] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:14.484 [2024-07-15 13:16:26.711058] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.484 13:16:26 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:14.741 malloc0 00:32:14.742 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:14.999 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:32:15.257 [2024-07-15 13:16:27.570439] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=116505 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 116505 /var/tmp/bdevperf.sock 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116505 ']' 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.257 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.257 13:16:27 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:15.257 [2024-07-15 13:16:27.652461] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:15.257 [2024-07-15 13:16:27.652579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116505 ] 00:32:15.514 [2024-07-15 13:16:27.794662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.514 [2024-07-15 13:16:27.867030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:16.447 13:16:28 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.447 13:16:28 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:16.447 13:16:28 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:32:16.447 [2024-07-15 13:16:28.894009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:16.447 [2024-07-15 13:16:28.894124] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:16.705 TLSTESTn1 00:32:16.705 13:16:28 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:16.963 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:32:16.963 "subsystems": [ 00:32:16.963 { 00:32:16.963 "subsystem": "keyring", 00:32:16.963 "config": [] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "iobuf", 00:32:16.963 "config": [ 00:32:16.963 { 00:32:16.963 "method": "iobuf_set_options", 00:32:16.963 "params": { 00:32:16.963 "large_bufsize": 135168, 00:32:16.963 "large_pool_count": 1024, 00:32:16.963 "small_bufsize": 8192, 00:32:16.963 "small_pool_count": 8192 00:32:16.963 } 00:32:16.963 } 00:32:16.963 ] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "sock", 00:32:16.963 "config": [ 00:32:16.963 { 00:32:16.963 "method": "sock_set_default_impl", 00:32:16.963 "params": { 00:32:16.963 "impl_name": "posix" 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "sock_impl_set_options", 00:32:16.963 "params": { 00:32:16.963 "enable_ktls": false, 00:32:16.963 "enable_placement_id": 0, 00:32:16.963 "enable_quickack": false, 00:32:16.963 "enable_recv_pipe": true, 00:32:16.963 "enable_zerocopy_send_client": false, 00:32:16.963 "enable_zerocopy_send_server": true, 00:32:16.963 "impl_name": "ssl", 00:32:16.963 "recv_buf_size": 4096, 00:32:16.963 "send_buf_size": 4096, 00:32:16.963 "tls_version": 0, 00:32:16.963 "zerocopy_threshold": 0 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "sock_impl_set_options", 00:32:16.963 "params": 
{ 00:32:16.963 "enable_ktls": false, 00:32:16.963 "enable_placement_id": 0, 00:32:16.963 "enable_quickack": false, 00:32:16.963 "enable_recv_pipe": true, 00:32:16.963 "enable_zerocopy_send_client": false, 00:32:16.963 "enable_zerocopy_send_server": true, 00:32:16.963 "impl_name": "posix", 00:32:16.963 "recv_buf_size": 2097152, 00:32:16.963 "send_buf_size": 2097152, 00:32:16.963 "tls_version": 0, 00:32:16.963 "zerocopy_threshold": 0 00:32:16.963 } 00:32:16.963 } 00:32:16.963 ] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "vmd", 00:32:16.963 "config": [] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "accel", 00:32:16.963 "config": [ 00:32:16.963 { 00:32:16.963 "method": "accel_set_options", 00:32:16.963 "params": { 00:32:16.963 "buf_count": 2048, 00:32:16.963 "large_cache_size": 16, 00:32:16.963 "sequence_count": 2048, 00:32:16.963 "small_cache_size": 128, 00:32:16.963 "task_count": 2048 00:32:16.963 } 00:32:16.963 } 00:32:16.963 ] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "bdev", 00:32:16.963 "config": [ 00:32:16.963 { 00:32:16.963 "method": "bdev_set_options", 00:32:16.963 "params": { 00:32:16.963 "bdev_auto_examine": true, 00:32:16.963 "bdev_io_cache_size": 256, 00:32:16.963 "bdev_io_pool_size": 65535, 00:32:16.963 "iobuf_large_cache_size": 16, 00:32:16.963 "iobuf_small_cache_size": 128 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_raid_set_options", 00:32:16.963 "params": { 00:32:16.963 "process_window_size_kb": 1024 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_iscsi_set_options", 00:32:16.963 "params": { 00:32:16.963 "timeout_sec": 30 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_nvme_set_options", 00:32:16.963 "params": { 00:32:16.963 "action_on_timeout": "none", 00:32:16.963 "allow_accel_sequence": false, 00:32:16.963 "arbitration_burst": 0, 00:32:16.963 "bdev_retry_count": 3, 00:32:16.963 "ctrlr_loss_timeout_sec": 0, 00:32:16.963 "delay_cmd_submit": true, 00:32:16.963 "dhchap_dhgroups": [ 00:32:16.963 "null", 00:32:16.963 "ffdhe2048", 00:32:16.963 "ffdhe3072", 00:32:16.963 "ffdhe4096", 00:32:16.963 "ffdhe6144", 00:32:16.963 "ffdhe8192" 00:32:16.963 ], 00:32:16.963 "dhchap_digests": [ 00:32:16.963 "sha256", 00:32:16.963 "sha384", 00:32:16.963 "sha512" 00:32:16.963 ], 00:32:16.963 "disable_auto_failback": false, 00:32:16.963 "fast_io_fail_timeout_sec": 0, 00:32:16.963 "generate_uuids": false, 00:32:16.963 "high_priority_weight": 0, 00:32:16.963 "io_path_stat": false, 00:32:16.963 "io_queue_requests": 0, 00:32:16.963 "keep_alive_timeout_ms": 10000, 00:32:16.963 "low_priority_weight": 0, 00:32:16.963 "medium_priority_weight": 0, 00:32:16.963 "nvme_adminq_poll_period_us": 10000, 00:32:16.963 "nvme_error_stat": false, 00:32:16.963 "nvme_ioq_poll_period_us": 0, 00:32:16.963 "rdma_cm_event_timeout_ms": 0, 00:32:16.963 "rdma_max_cq_size": 0, 00:32:16.963 "rdma_srq_size": 0, 00:32:16.963 "reconnect_delay_sec": 0, 00:32:16.963 "timeout_admin_us": 0, 00:32:16.963 "timeout_us": 0, 00:32:16.963 "transport_ack_timeout": 0, 00:32:16.963 "transport_retry_count": 4, 00:32:16.963 "transport_tos": 0 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_nvme_set_hotplug", 00:32:16.963 "params": { 00:32:16.963 "enable": false, 00:32:16.963 "period_us": 100000 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_malloc_create", 00:32:16.963 "params": { 00:32:16.963 "block_size": 4096, 00:32:16.963 "name": "malloc0", 00:32:16.963 
"num_blocks": 8192, 00:32:16.963 "optimal_io_boundary": 0, 00:32:16.963 "physical_block_size": 4096, 00:32:16.963 "uuid": "56538937-6883-4b34-9194-f42bce1c4fc8" 00:32:16.963 } 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "method": "bdev_wait_for_examine" 00:32:16.963 } 00:32:16.963 ] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "nbd", 00:32:16.963 "config": [] 00:32:16.963 }, 00:32:16.963 { 00:32:16.963 "subsystem": "scheduler", 00:32:16.963 "config": [ 00:32:16.963 { 00:32:16.963 "method": "framework_set_scheduler", 00:32:16.963 "params": { 00:32:16.964 "name": "static" 00:32:16.964 } 00:32:16.964 } 00:32:16.964 ] 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "subsystem": "nvmf", 00:32:16.964 "config": [ 00:32:16.964 { 00:32:16.964 "method": "nvmf_set_config", 00:32:16.964 "params": { 00:32:16.964 "admin_cmd_passthru": { 00:32:16.964 "identify_ctrlr": false 00:32:16.964 }, 00:32:16.964 "discovery_filter": "match_any" 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_set_max_subsystems", 00:32:16.964 "params": { 00:32:16.964 "max_subsystems": 1024 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_set_crdt", 00:32:16.964 "params": { 00:32:16.964 "crdt1": 0, 00:32:16.964 "crdt2": 0, 00:32:16.964 "crdt3": 0 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_create_transport", 00:32:16.964 "params": { 00:32:16.964 "abort_timeout_sec": 1, 00:32:16.964 "ack_timeout": 0, 00:32:16.964 "buf_cache_size": 4294967295, 00:32:16.964 "c2h_success": false, 00:32:16.964 "data_wr_pool_size": 0, 00:32:16.964 "dif_insert_or_strip": false, 00:32:16.964 "in_capsule_data_size": 4096, 00:32:16.964 "io_unit_size": 131072, 00:32:16.964 "max_aq_depth": 128, 00:32:16.964 "max_io_qpairs_per_ctrlr": 127, 00:32:16.964 "max_io_size": 131072, 00:32:16.964 "max_queue_depth": 128, 00:32:16.964 "num_shared_buffers": 511, 00:32:16.964 "sock_priority": 0, 00:32:16.964 "trtype": "TCP", 00:32:16.964 "zcopy": false 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_create_subsystem", 00:32:16.964 "params": { 00:32:16.964 "allow_any_host": false, 00:32:16.964 "ana_reporting": false, 00:32:16.964 "max_cntlid": 65519, 00:32:16.964 "max_namespaces": 10, 00:32:16.964 "min_cntlid": 1, 00:32:16.964 "model_number": "SPDK bdev Controller", 00:32:16.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.964 "serial_number": "SPDK00000000000001" 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_subsystem_add_host", 00:32:16.964 "params": { 00:32:16.964 "host": "nqn.2016-06.io.spdk:host1", 00:32:16.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.964 "psk": "/tmp/tmp.JuGrLOIA2m" 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_subsystem_add_ns", 00:32:16.964 "params": { 00:32:16.964 "namespace": { 00:32:16.964 "bdev_name": "malloc0", 00:32:16.964 "nguid": "5653893768834B349194F42BCE1C4FC8", 00:32:16.964 "no_auto_visible": false, 00:32:16.964 "nsid": 1, 00:32:16.964 "uuid": "56538937-6883-4b34-9194-f42bce1c4fc8" 00:32:16.964 }, 00:32:16.964 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:16.964 } 00:32:16.964 }, 00:32:16.964 { 00:32:16.964 "method": "nvmf_subsystem_add_listener", 00:32:16.964 "params": { 00:32:16.964 "listen_address": { 00:32:16.964 "adrfam": "IPv4", 00:32:16.964 "traddr": "10.0.0.2", 00:32:16.964 "trsvcid": "4420", 00:32:16.964 "trtype": "TCP" 00:32:16.964 }, 00:32:16.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.964 "secure_channel": true 00:32:16.964 } 00:32:16.964 } 
00:32:16.964 ] 00:32:16.964 } 00:32:16.964 ] 00:32:16.964 }' 00:32:16.964 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:17.222 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:32:17.222 "subsystems": [ 00:32:17.222 { 00:32:17.222 "subsystem": "keyring", 00:32:17.222 "config": [] 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "subsystem": "iobuf", 00:32:17.222 "config": [ 00:32:17.222 { 00:32:17.222 "method": "iobuf_set_options", 00:32:17.222 "params": { 00:32:17.222 "large_bufsize": 135168, 00:32:17.222 "large_pool_count": 1024, 00:32:17.222 "small_bufsize": 8192, 00:32:17.222 "small_pool_count": 8192 00:32:17.222 } 00:32:17.222 } 00:32:17.222 ] 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "subsystem": "sock", 00:32:17.222 "config": [ 00:32:17.222 { 00:32:17.222 "method": "sock_set_default_impl", 00:32:17.222 "params": { 00:32:17.222 "impl_name": "posix" 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "sock_impl_set_options", 00:32:17.222 "params": { 00:32:17.222 "enable_ktls": false, 00:32:17.222 "enable_placement_id": 0, 00:32:17.222 "enable_quickack": false, 00:32:17.222 "enable_recv_pipe": true, 00:32:17.222 "enable_zerocopy_send_client": false, 00:32:17.222 "enable_zerocopy_send_server": true, 00:32:17.222 "impl_name": "ssl", 00:32:17.222 "recv_buf_size": 4096, 00:32:17.222 "send_buf_size": 4096, 00:32:17.222 "tls_version": 0, 00:32:17.222 "zerocopy_threshold": 0 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "sock_impl_set_options", 00:32:17.222 "params": { 00:32:17.222 "enable_ktls": false, 00:32:17.222 "enable_placement_id": 0, 00:32:17.222 "enable_quickack": false, 00:32:17.222 "enable_recv_pipe": true, 00:32:17.222 "enable_zerocopy_send_client": false, 00:32:17.222 "enable_zerocopy_send_server": true, 00:32:17.222 "impl_name": "posix", 00:32:17.222 "recv_buf_size": 2097152, 00:32:17.222 "send_buf_size": 2097152, 00:32:17.222 "tls_version": 0, 00:32:17.222 "zerocopy_threshold": 0 00:32:17.222 } 00:32:17.222 } 00:32:17.222 ] 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "subsystem": "vmd", 00:32:17.222 "config": [] 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "subsystem": "accel", 00:32:17.222 "config": [ 00:32:17.222 { 00:32:17.222 "method": "accel_set_options", 00:32:17.222 "params": { 00:32:17.222 "buf_count": 2048, 00:32:17.222 "large_cache_size": 16, 00:32:17.222 "sequence_count": 2048, 00:32:17.222 "small_cache_size": 128, 00:32:17.222 "task_count": 2048 00:32:17.222 } 00:32:17.222 } 00:32:17.222 ] 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "subsystem": "bdev", 00:32:17.222 "config": [ 00:32:17.222 { 00:32:17.222 "method": "bdev_set_options", 00:32:17.222 "params": { 00:32:17.222 "bdev_auto_examine": true, 00:32:17.222 "bdev_io_cache_size": 256, 00:32:17.222 "bdev_io_pool_size": 65535, 00:32:17.222 "iobuf_large_cache_size": 16, 00:32:17.222 "iobuf_small_cache_size": 128 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "bdev_raid_set_options", 00:32:17.222 "params": { 00:32:17.222 "process_window_size_kb": 1024 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "bdev_iscsi_set_options", 00:32:17.222 "params": { 00:32:17.222 "timeout_sec": 30 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "bdev_nvme_set_options", 00:32:17.222 "params": { 00:32:17.222 "action_on_timeout": "none", 00:32:17.222 "allow_accel_sequence": false, 00:32:17.222 
"arbitration_burst": 0, 00:32:17.222 "bdev_retry_count": 3, 00:32:17.222 "ctrlr_loss_timeout_sec": 0, 00:32:17.222 "delay_cmd_submit": true, 00:32:17.222 "dhchap_dhgroups": [ 00:32:17.222 "null", 00:32:17.222 "ffdhe2048", 00:32:17.222 "ffdhe3072", 00:32:17.222 "ffdhe4096", 00:32:17.222 "ffdhe6144", 00:32:17.222 "ffdhe8192" 00:32:17.222 ], 00:32:17.222 "dhchap_digests": [ 00:32:17.222 "sha256", 00:32:17.222 "sha384", 00:32:17.222 "sha512" 00:32:17.222 ], 00:32:17.222 "disable_auto_failback": false, 00:32:17.222 "fast_io_fail_timeout_sec": 0, 00:32:17.222 "generate_uuids": false, 00:32:17.222 "high_priority_weight": 0, 00:32:17.222 "io_path_stat": false, 00:32:17.222 "io_queue_requests": 512, 00:32:17.222 "keep_alive_timeout_ms": 10000, 00:32:17.222 "low_priority_weight": 0, 00:32:17.222 "medium_priority_weight": 0, 00:32:17.222 "nvme_adminq_poll_period_us": 10000, 00:32:17.222 "nvme_error_stat": false, 00:32:17.222 "nvme_ioq_poll_period_us": 0, 00:32:17.222 "rdma_cm_event_timeout_ms": 0, 00:32:17.222 "rdma_max_cq_size": 0, 00:32:17.222 "rdma_srq_size": 0, 00:32:17.222 "reconnect_delay_sec": 0, 00:32:17.222 "timeout_admin_us": 0, 00:32:17.222 "timeout_us": 0, 00:32:17.222 "transport_ack_timeout": 0, 00:32:17.222 "transport_retry_count": 4, 00:32:17.222 "transport_tos": 0 00:32:17.222 } 00:32:17.222 }, 00:32:17.222 { 00:32:17.222 "method": "bdev_nvme_attach_controller", 00:32:17.222 "params": { 00:32:17.222 "adrfam": "IPv4", 00:32:17.222 "ctrlr_loss_timeout_sec": 0, 00:32:17.222 "ddgst": false, 00:32:17.222 "fast_io_fail_timeout_sec": 0, 00:32:17.222 "hdgst": false, 00:32:17.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.222 "name": "TLSTEST", 00:32:17.222 "prchk_guard": false, 00:32:17.222 "prchk_reftag": false, 00:32:17.222 "psk": "/tmp/tmp.JuGrLOIA2m", 00:32:17.223 "reconnect_delay_sec": 0, 00:32:17.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.223 "traddr": "10.0.0.2", 00:32:17.223 "trsvcid": "4420", 00:32:17.223 "trtype": "TCP" 00:32:17.223 } 00:32:17.223 }, 00:32:17.223 { 00:32:17.223 "method": "bdev_nvme_set_hotplug", 00:32:17.223 "params": { 00:32:17.223 "enable": false, 00:32:17.223 "period_us": 100000 00:32:17.223 } 00:32:17.223 }, 00:32:17.223 { 00:32:17.223 "method": "bdev_wait_for_examine" 00:32:17.223 } 00:32:17.223 ] 00:32:17.223 }, 00:32:17.223 { 00:32:17.223 "subsystem": "nbd", 00:32:17.223 "config": [] 00:32:17.223 } 00:32:17.223 ] 00:32:17.223 }' 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@199 -- # killprocess 116505 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116505 ']' 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116505 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.223 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116505 00:32:17.481 killing process with pid 116505 00:32:17.481 Received shutdown signal, test time was about 10.000000 seconds 00:32:17.481 00:32:17.481 Latency(us) 00:32:17.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.481 =================================================================================================================== 00:32:17.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:17.481 13:16:29 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116505' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116505 00:32:17.481 [2024-07-15 13:16:29.699919] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116505 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@200 -- # killprocess 116421 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116421 ']' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116421 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116421 00:32:17.481 killing process with pid 116421 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116421' 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116421 00:32:17.481 [2024-07-15 13:16:29.893753] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:17.481 13:16:29 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116421 00:32:17.740 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:32:17.740 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:17.740 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:17.740 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:32:17.740 "subsystems": [ 00:32:17.740 { 00:32:17.740 "subsystem": "keyring", 00:32:17.740 "config": [] 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "subsystem": "iobuf", 00:32:17.740 "config": [ 00:32:17.740 { 00:32:17.740 "method": "iobuf_set_options", 00:32:17.740 "params": { 00:32:17.740 "large_bufsize": 135168, 00:32:17.740 "large_pool_count": 1024, 00:32:17.740 "small_bufsize": 8192, 00:32:17.740 "small_pool_count": 8192 00:32:17.740 } 00:32:17.740 } 00:32:17.740 ] 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "subsystem": "sock", 00:32:17.740 "config": [ 00:32:17.740 { 00:32:17.740 "method": "sock_set_default_impl", 00:32:17.740 "params": { 00:32:17.740 "impl_name": "posix" 00:32:17.740 } 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "method": "sock_impl_set_options", 00:32:17.740 "params": { 00:32:17.740 "enable_ktls": false, 00:32:17.740 "enable_placement_id": 0, 00:32:17.740 "enable_quickack": false, 00:32:17.740 
"enable_recv_pipe": true, 00:32:17.740 "enable_zerocopy_send_client": false, 00:32:17.740 "enable_zerocopy_send_server": true, 00:32:17.740 "impl_name": "ssl", 00:32:17.740 "recv_buf_size": 4096, 00:32:17.740 "send_buf_size": 4096, 00:32:17.740 "tls_version": 0, 00:32:17.740 "zerocopy_threshold": 0 00:32:17.740 } 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "method": "sock_impl_set_options", 00:32:17.740 "params": { 00:32:17.740 "enable_ktls": false, 00:32:17.740 "enable_placement_id": 0, 00:32:17.740 "enable_quickack": false, 00:32:17.740 "enable_recv_pipe": true, 00:32:17.740 "enable_zerocopy_send_client": false, 00:32:17.740 "enable_zerocopy_send_server": true, 00:32:17.740 "impl_name": "posix", 00:32:17.740 "recv_buf_size": 2097152, 00:32:17.740 "send_buf_size": 2097152, 00:32:17.740 "tls_version": 0, 00:32:17.740 "zerocopy_threshold": 0 00:32:17.740 } 00:32:17.740 } 00:32:17.740 ] 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "subsystem": "vmd", 00:32:17.740 "config": [] 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "subsystem": "accel", 00:32:17.740 "config": [ 00:32:17.740 { 00:32:17.740 "method": "accel_set_options", 00:32:17.740 "params": { 00:32:17.740 "buf_count": 2048, 00:32:17.740 "large_cache_size": 16, 00:32:17.740 "sequence_count": 2048, 00:32:17.740 "small_cache_size": 128, 00:32:17.740 "task_count": 2048 00:32:17.740 } 00:32:17.740 } 00:32:17.740 ] 00:32:17.740 }, 00:32:17.740 { 00:32:17.740 "subsystem": "bdev", 00:32:17.740 "config": [ 00:32:17.740 { 00:32:17.740 "method": "bdev_set_options", 00:32:17.740 "params": { 00:32:17.740 "bdev_auto_examine": true, 00:32:17.740 "bdev_io_cache_size": 256, 00:32:17.740 "bdev_io_pool_size": 65535, 00:32:17.740 "iobuf_large_cache_size": 16, 00:32:17.741 "iobuf_small_cache_size": 128 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_raid_set_options", 00:32:17.741 "params": { 00:32:17.741 "process_window_size_kb": 1024 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_iscsi_set_options", 00:32:17.741 "params": { 00:32:17.741 "timeout_sec": 30 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_nvme_set_options", 00:32:17.741 "params": { 00:32:17.741 "action_on_timeout": "none", 00:32:17.741 "allow_accel_sequence": false, 00:32:17.741 "arbitration_burst": 0, 00:32:17.741 "bdev_retry_count": 3, 00:32:17.741 "ctrlr_loss_timeout_sec": 0, 00:32:17.741 "delay_cmd_submit": true, 00:32:17.741 "dhchap_dhgroups": [ 00:32:17.741 "null", 00:32:17.741 "ffdhe2048", 00:32:17.741 "ffdhe3072", 00:32:17.741 "ffdhe4096", 00:32:17.741 "ffdhe6144", 00:32:17.741 "ffdhe8192" 00:32:17.741 ], 00:32:17.741 "dhchap_digests": [ 00:32:17.741 "sha256", 00:32:17.741 "sha384", 00:32:17.741 "sha512" 00:32:17.741 ], 00:32:17.741 "disable_auto_failback": false, 00:32:17.741 "fast_io_fail_timeout_sec": 0, 00:32:17.741 "generate_uuids": false, 00:32:17.741 "high_priority_weight": 0, 00:32:17.741 "io_path_stat": false, 00:32:17.741 "io_queue_requests": 0, 00:32:17.741 "keep_alive_timeout_ms": 10000, 00:32:17.741 "low_priority_weight": 0, 00:32:17.741 "medium_priority_weight": 0, 00:32:17.741 "nvme_adminq_poll_period_us": 10000, 00:32:17.741 "nvme_error_stat": false, 00:32:17.741 "nvme_ioq_poll_period_us": 0, 00:32:17.741 "rdma_cm_event_timeout_ms": 0, 00:32:17.741 "rdma_max_cq_size": 0, 00:32:17.741 "rdma_srq_size": 0, 00:32:17.741 "reconnect_delay_sec": 0, 00:32:17.741 "timeout_admin_us": 0, 00:32:17.741 "timeout_us": 0, 00:32:17.741 "transport_ack_timeout": 0, 00:32:17.741 "transport_retry_count": 4, 
00:32:17.741 "transport_tos": 0 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_nvme_set_hotplug", 00:32:17.741 "params": { 00:32:17.741 "enable": false, 00:32:17.741 "period_us": 100000 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_malloc_create", 00:32:17.741 "params": { 00:32:17.741 "block_size": 4096, 00:32:17.741 "name": "malloc0", 00:32:17.741 "num_blocks": 8192, 00:32:17.741 "optimal_io_boundary": 0, 00:32:17.741 "physical_block_size": 4096, 00:32:17.741 "uuid": "56538937-6883-4b34-9194-f42bce1c4fc8" 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "bdev_wait_for_examine" 00:32:17.741 } 00:32:17.741 ] 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "subsystem": "nbd", 00:32:17.741 "config": [] 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "subsystem": "scheduler", 00:32:17.741 "config": [ 00:32:17.741 { 00:32:17.741 "method": "framework_set_scheduler", 00:32:17.741 "params": { 00:32:17.741 "name": "static" 00:32:17.741 } 00:32:17.741 } 00:32:17.741 ] 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "subsystem": "nvmf", 00:32:17.741 "config": [ 00:32:17.741 { 00:32:17.741 "method": "nvmf_set_config", 00:32:17.741 "params": { 00:32:17.741 "admin_cmd_passthru": { 00:32:17.741 "identify_ctrlr": false 00:32:17.741 }, 00:32:17.741 "discovery_filter": "match_any" 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_set_max_subsystems", 00:32:17.741 "params": { 00:32:17.741 "max_subsystems": 1024 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_set_crdt", 00:32:17.741 "params": { 00:32:17.741 "crdt1": 0, 00:32:17.741 "crdt2": 0, 00:32:17.741 "crdt3": 0 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_create_transport", 00:32:17.741 "params": { 00:32:17.741 "abort_timeout_sec": 1, 00:32:17.741 "ack_timeout": 0, 00:32:17.741 "buf_cache_size": 4294967295, 00:32:17.741 "c2h_success": false, 00:32:17.741 "data_wr_pool_size": 0, 00:32:17.741 "dif_insert_or_strip": false, 00:32:17.741 "in_capsule_data_size": 4096, 00:32:17.741 "io_unit_size": 131072, 00:32:17.741 "max_aq_depth": 128, 00:32:17.741 "max_io_qpairs_per_ctrlr": 127, 00:32:17.741 "max_io_size": 131072, 00:32:17.741 "max_queue_depth": 128, 00:32:17.741 "num_shared_buffers": 511, 00:32:17.741 "sock_priority": 0, 00:32:17.741 "trtype": "TCP", 00:32:17.741 "zcopy": false 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_create_subsystem", 00:32:17.741 "params": { 00:32:17.741 "allow_any_host": false, 00:32:17.741 "ana_reporting": false, 00:32:17.741 "max_cntlid": 65519, 00:32:17.741 "max_namespaces": 10, 00:32:17.741 "min_cntlid": 1, 00:32:17.741 "model_number": "SPDK bdev Controller", 00:32:17.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.741 "serial_number": "SPDK00000000000001" 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_subsystem_add_host", 00:32:17.741 "params": { 00:32:17.741 "host": "nqn.2016-06.io.spdk:host1", 00:32:17.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.741 "psk": "/tmp/tmp.JuGrLOIA2m" 00:32:17.741 } 00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_subsystem_add_ns", 00:32:17.741 "params": { 00:32:17.741 "namespace": { 00:32:17.741 "bdev_name": "malloc0", 00:32:17.741 "nguid": "5653893768834B349194F42BCE1C4FC8", 00:32:17.741 "no_auto_visible": false, 00:32:17.741 "nsid": 1, 00:32:17.741 "uuid": "56538937-6883-4b34-9194-f42bce1c4fc8" 00:32:17.741 }, 00:32:17.741 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:17.741 } 
00:32:17.741 }, 00:32:17.741 { 00:32:17.741 "method": "nvmf_subsystem_add_listener", 00:32:17.741 "params": { 00:32:17.741 "listen_address": { 00:32:17.741 "adrfam": "IPv4", 00:32:17.741 "traddr": "10.0.0.2", 00:32:17.741 "trsvcid": "4420", 00:32:17.741 "trtype": "TCP" 00:32:17.741 }, 00:32:17.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.741 "secure_channel": true 00:32:17.741 } 00:32:17.741 } 00:32:17.741 ] 00:32:17.741 } 00:32:17.741 ] 00:32:17.741 }' 00:32:17.741 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:17.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.741 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116578 00:32:17.741 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116578 00:32:17.741 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 -c /dev/fd/62 00:32:17.741 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116578 ']' 00:32:17.742 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.742 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:17.742 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.742 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:17.742 13:16:30 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:17.742 [2024-07-15 13:16:30.120845] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.742 [2024-07-15 13:16:30.122193] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:17.742 [2024-07-15 13:16:30.122377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.999 [2024-07-15 13:16:30.258643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.999 [2024-07-15 13:16:30.317709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.999 [2024-07-15 13:16:30.317805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.999 [2024-07-15 13:16:30.317818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.999 [2024-07-15 13:16:30.317826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.999 [2024-07-15 13:16:30.317833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.999 [2024-07-15 13:16:30.317926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.999 [2024-07-15 13:16:30.319276] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
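The target above is started with -c /dev/fd/62, meaning the JSON dumped by tls.sh@203 is handed to nvmf_tgt over a spare file descriptor instead of a config file on disk. A minimal sketch of that pattern in plain bash follows; the CONFIG placeholder and the exec/printf plumbing are assumptions for illustration, and the exact fd handling inside nvmfappstart may differ:

  CONFIG='{ "subsystems": [] }'          # stands in for the full JSON echoed above
  exec 62< <(printf '%s\n' "$CONFIG")    # expose the config as /dev/fd/62
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 -c /dev/fd/62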
00:32:18.256 [2024-07-15 13:16:30.473004] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:18.256 [2024-07-15 13:16:30.512225] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.256 [2024-07-15 13:16:30.528060] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:18.256 [2024-07-15 13:16:30.544096] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:18.256 [2024-07-15 13:16:30.544302] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=116622 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 116622 /var/tmp/bdevperf.sock 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116622 ']' 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
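bdevperf is launched here with -z, so it comes up idle and waits to be configured over its own RPC socket (/var/tmp/bdevperf.sock) before any I/O is issued; the "Waiting for process to start up and listen..." message above is the harness waiting for that socket to appear. The launch line recorded at tls.sh@204, with the per-test config again passed over a spare file descriptor, is:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63

The 0x4 core mask keeps bdevperf off the target's core (the target runs with -m 0x2), and -t 10 sets the 10-second run time reported in the results further down.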
00:32:18.822 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:32:18.822 "subsystems": [ 00:32:18.822 { 00:32:18.822 "subsystem": "keyring", 00:32:18.822 "config": [] 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "subsystem": "iobuf", 00:32:18.822 "config": [ 00:32:18.822 { 00:32:18.822 "method": "iobuf_set_options", 00:32:18.822 "params": { 00:32:18.822 "large_bufsize": 135168, 00:32:18.822 "large_pool_count": 1024, 00:32:18.822 "small_bufsize": 8192, 00:32:18.822 "small_pool_count": 8192 00:32:18.822 } 00:32:18.822 } 00:32:18.822 ] 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "subsystem": "sock", 00:32:18.822 "config": [ 00:32:18.822 { 00:32:18.822 "method": "sock_set_default_impl", 00:32:18.822 "params": { 00:32:18.822 "impl_name": "posix" 00:32:18.822 } 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "method": "sock_impl_set_options", 00:32:18.822 "params": { 00:32:18.822 "enable_ktls": false, 00:32:18.822 "enable_placement_id": 0, 00:32:18.822 "enable_quickack": false, 00:32:18.822 "enable_recv_pipe": true, 00:32:18.822 "enable_zerocopy_send_client": false, 00:32:18.822 "enable_zerocopy_send_server": true, 00:32:18.822 "impl_name": "ssl", 00:32:18.822 "recv_buf_size": 4096, 00:32:18.822 "send_buf_size": 4096, 00:32:18.822 "tls_version": 0, 00:32:18.822 "zerocopy_threshold": 0 00:32:18.822 } 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "method": "sock_impl_set_options", 00:32:18.822 "params": { 00:32:18.822 "enable_ktls": false, 00:32:18.822 "enable_placement_id": 0, 00:32:18.822 "enable_quickack": false, 00:32:18.822 "enable_recv_pipe": true, 00:32:18.822 "enable_zerocopy_send_client": false, 00:32:18.822 "enable_zerocopy_send_server": true, 00:32:18.822 "impl_name": "posix", 00:32:18.822 "recv_buf_size": 2097152, 00:32:18.822 "send_buf_size": 2097152, 00:32:18.822 "tls_version": 0, 00:32:18.822 "zerocopy_threshold": 0 00:32:18.822 } 00:32:18.822 } 00:32:18.822 ] 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "subsystem": "vmd", 00:32:18.822 "config": [] 00:32:18.822 }, 00:32:18.822 { 00:32:18.822 "subsystem": "accel", 00:32:18.822 "config": [ 00:32:18.822 { 00:32:18.822 "method": "accel_set_options", 00:32:18.822 "params": { 00:32:18.822 "buf_count": 2048, 00:32:18.822 "large_cache_size": 16, 00:32:18.823 "sequence_count": 2048, 00:32:18.823 "small_cache_size": 128, 00:32:18.823 "task_count": 2048 00:32:18.823 } 00:32:18.823 } 00:32:18.823 ] 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "subsystem": "bdev", 00:32:18.823 "config": [ 00:32:18.823 { 00:32:18.823 "method": "bdev_set_options", 00:32:18.823 "params": { 00:32:18.823 "bdev_auto_examine": true, 00:32:18.823 "bdev_io_cache_size": 256, 00:32:18.823 "bdev_io_pool_size": 65535, 00:32:18.823 "iobuf_large_cache_size": 16, 00:32:18.823 "iobuf_small_cache_size": 128 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_raid_set_options", 00:32:18.823 "params": { 00:32:18.823 "process_window_size_kb": 1024 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_iscsi_set_options", 00:32:18.823 "params": { 00:32:18.823 "timeout_sec": 30 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_nvme_set_options", 00:32:18.823 "params": { 00:32:18.823 "action_on_timeout": "none", 00:32:18.823 "allow_accel_sequence": false, 00:32:18.823 "arbitration_burst": 0, 00:32:18.823 "bdev_retry_count": 3, 00:32:18.823 "ctrlr_loss_timeout_sec": 0, 00:32:18.823 "delay_cmd_submit": true, 00:32:18.823 "dhchap_dhgroups": [ 00:32:18.823 "null", 00:32:18.823 "ffdhe2048", 00:32:18.823 
"ffdhe3072", 00:32:18.823 "ffdhe4096", 00:32:18.823 "ffdhe6144", 00:32:18.823 "ffdhe8192" 00:32:18.823 ], 00:32:18.823 "dhchap_digests": [ 00:32:18.823 "sha256", 00:32:18.823 "sha384", 00:32:18.823 "sha512" 00:32:18.823 ], 00:32:18.823 "disable_auto_failback": false, 00:32:18.823 "fast_io_fail_timeout_sec": 0, 00:32:18.823 "generate_uuids": false, 00:32:18.823 "high_priority_weight": 0, 00:32:18.823 "io_path_stat": false, 00:32:18.823 "io_queue_requests": 512, 00:32:18.823 "keep_alive_timeout_ms": 10000, 00:32:18.823 "low_priority_weight": 0, 00:32:18.823 "medium_priority_weight": 0, 00:32:18.823 "nvme_adminq_poll_period_us": 10000, 00:32:18.823 "nvme_error_stat": false, 00:32:18.823 "nvme_ioq_poll_period_us": 0, 00:32:18.823 "rdma_cm_event_timeout_ms": 0, 00:32:18.823 "rdma_max_cq_size": 0, 00:32:18.823 "rdma_srq_size": 0, 00:32:18.823 "reconnect_delay_sec": 0, 00:32:18.823 "timeout_admin_us": 0, 00:32:18.823 "timeout_us": 0, 00:32:18.823 "transport_ack_timeout": 0, 00:32:18.823 "transport_retry_count": 4, 00:32:18.823 "transport_tos": 0 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_nvme_attach_controller", 00:32:18.823 "params": { 00:32:18.823 "adrfam": "IPv4", 00:32:18.823 "ctrlr_loss_timeout_sec": 0, 00:32:18.823 "ddgst": false, 00:32:18.823 "fast_io_fail_timeout_sec": 0, 00:32:18.823 "hdgst": false, 00:32:18.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.823 "name": "TLSTEST", 00:32:18.823 "prchk_guard": false, 00:32:18.823 "prchk_reftag": false, 00:32:18.823 "psk": "/tmp/tmp.JuGrLOIA2m", 00:32:18.823 "reconnect_delay_sec": 0, 00:32:18.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.823 "traddr": "10.0.0.2", 00:32:18.823 "trsvcid": "4420", 00:32:18.823 "trtype": "TCP" 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_nvme_set_hotplug", 00:32:18.823 "params": { 00:32:18.823 "enable": false, 00:32:18.823 "period_us": 100000 00:32:18.823 } 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "method": "bdev_wait_for_examine" 00:32:18.823 } 00:32:18.823 ] 00:32:18.823 }, 00:32:18.823 { 00:32:18.823 "subsystem": "nbd", 00:32:18.823 "config": [] 00:32:18.823 } 00:32:18.823 ] 00:32:18.823 }' 00:32:18.823 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.823 13:16:31 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:18.823 [2024-07-15 13:16:31.264240] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:32:18.823 [2024-07-15 13:16:31.264339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116622 ] 00:32:19.081 [2024-07-15 13:16:31.404359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.081 [2024-07-15 13:16:31.479900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:19.338 [2024-07-15 13:16:31.615829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:19.338 [2024-07-15 13:16:31.615964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:19.905 13:16:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.905 13:16:32 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:19.905 13:16:32 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:32:20.281 Running I/O for 10 seconds... 00:32:30.242 00:32:30.242 Latency(us) 00:32:30.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.242 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:30.242 Verification LBA range: start 0x0 length 0x2000 00:32:30.242 TLSTESTn1 : 10.02 3680.60 14.38 0.00 0.00 34707.73 7149.38 38844.97 00:32:30.242 =================================================================================================================== 00:32:30.242 Total : 3680.60 14.38 0.00 0.00 34707.73 7149.38 38844.97 00:32:30.242 0 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@214 -- # killprocess 116622 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116622 ']' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116622 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116622 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:30.242 killing process with pid 116622 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116622' 00:32:30.242 Received shutdown signal, test time was about 10.000000 seconds 00:32:30.242 00:32:30.242 Latency(us) 00:32:30.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.242 =================================================================================================================== 00:32:30.242 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116622 
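The 10-second TLSTESTn1 run above is driven entirely over bdevperf's RPC socket once the controller is attached; the call recorded at tls.sh@211 is:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -t 20 here appears to be the RPC-side timeout for perform_tests, presumably chosen to outlast the 10-second workload set on the bdevperf command line, rather than the I/O run time itself.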
00:32:30.242 [2024-07-15 13:16:42.477807] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116622 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@215 -- # killprocess 116578 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116578 ']' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116578 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116578 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:30.242 killing process with pid 116578 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116578' 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116578 00:32:30.242 [2024-07-15 13:16:42.666234] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:30.242 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116578 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116757 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116757 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116757 ']' 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:30.500 13:16:42 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:30.500 [2024-07-15 13:16:42.918223] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
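For this second pass (tls.sh@218 onwards) the target is started bare, with no -c config, inside the test's network namespace; the transport, subsystem, namespace and TLS listener are then built up through individual RPCs in the lines that follow. The launch line recorded at nvmf/common.sh@484 is:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode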
00:32:30.500 [2024-07-15 13:16:42.919894] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:30.500 [2024-07-15 13:16:42.920008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.758 [2024-07-15 13:16:43.062208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.758 [2024-07-15 13:16:43.122356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.758 [2024-07-15 13:16:43.122416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.758 [2024-07-15 13:16:43.122429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.758 [2024-07-15 13:16:43.122437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.758 [2024-07-15 13:16:43.122444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.758 [2024-07-15 13:16:43.122472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.758 [2024-07-15 13:16:43.172964] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.758 [2024-07-15 13:16:43.173273] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JuGrLOIA2m 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.JuGrLOIA2m 00:32:31.753 13:16:43 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:31.753 [2024-07-15 13:16:44.127080] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.753 13:16:44 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:32.011 13:16:44 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:32.269 [2024-07-15 13:16:44.659030] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:32.269 [2024-07-15 13:16:44.659328] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.269 13:16:44 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:32.527 malloc0 00:32:32.527 13:16:44 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m 00:32:33.093 [2024-07-15 13:16:45.518954] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=116861 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 116861 /var/tmp/bdevperf.sock 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116861 ']' 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.093 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:33.351 [2024-07-15 13:16:45.582831] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
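Collected from the tls.sh@51-58 calls logged above, the setup_nvmf_tgt helper reduces to the following RPC sequence; -k on the listener requests the secure-channel (TLS) listener, matching the "TLS support is considered experimental" notice, and --psk here is still the deprecated path form flagged by the nvmf_tcp_psk_path warning:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JuGrLOIA2m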
00:32:33.351 [2024-07-15 13:16:45.582924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116861 ] 00:32:33.351 [2024-07-15 13:16:45.714864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.351 [2024-07-15 13:16:45.793140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.609 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:33.609 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:33.609 13:16:45 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JuGrLOIA2m 00:32:33.867 13:16:46 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:34.124 [2024-07-15 13:16:46.396348] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:34.124 nvme0n1 00:32:34.124 13:16:46 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.382 Running I/O for 1 seconds... 00:32:35.316 00:32:35.316 Latency(us) 00:32:35.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.316 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:35.316 Verification LBA range: start 0x0 length 0x2000 00:32:35.316 nvme0n1 : 1.02 3519.28 13.75 0.00 0.00 35884.54 4200.26 25022.84 00:32:35.316 =================================================================================================================== 00:32:35.316 Total : 3519.28 13.75 0.00 0.00 35884.54 4200.26 25022.84 00:32:35.316 0 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@234 -- # killprocess 116861 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116861 ']' 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116861 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116861 00:32:35.316 killing process with pid 116861 00:32:35.316 Received shutdown signal, test time was about 1.000000 seconds 00:32:35.316 00:32:35.316 Latency(us) 00:32:35.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.316 =================================================================================================================== 00:32:35.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 116861' 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116861 00:32:35.316 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116861 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@235 -- # killprocess 116757 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116757 ']' 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116757 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116757 00:32:35.574 killing process with pid 116757 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116757' 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116757 00:32:35.574 [2024-07-15 13:16:47.856530] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:35.574 13:16:47 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116757 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:35.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=116922 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 116922 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116922 ']' 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:35.574 13:16:48 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:35.831 [2024-07-15 13:16:48.106499] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
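The killprocess teardown that repeats through this section (autotest_common.sh@948-972) follows one pattern: check the pid is still alive, grab its process name (reactor_1 here) to see whether it was started under sudo, then signal and reap it. A rough sketch of that sequence using the bdevperf pid from this run, with the comments as assumptions about intent:

  kill -0 116861                        # is the process still alive?
  ps --no-headers -o comm= 116861       # process name, e.g. reactor_1; a sudo wrapper would need a different kill path
  echo 'killing process with pid 116861'
  kill 116861
  wait 116861                           # reap the background job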
00:32:35.831 [2024-07-15 13:16:48.108178] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:35.831 [2024-07-15 13:16:48.108280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.831 [2024-07-15 13:16:48.250083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.089 [2024-07-15 13:16:48.338853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.089 [2024-07-15 13:16:48.338946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.089 [2024-07-15 13:16:48.338967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.089 [2024-07-15 13:16:48.338981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.089 [2024-07-15 13:16:48.338994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.089 [2024-07-15 13:16:48.339036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.089 [2024-07-15 13:16:48.391458] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.089 [2024-07-15 13:16:48.391843] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.654 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:36.654 [2024-07-15 13:16:49.119866] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.912 malloc0 00:32:36.912 [2024-07-15 13:16:49.155818] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:36.912 [2024-07-15 13:16:49.156166] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=116968 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 116968 /var/tmp/bdevperf.sock 00:32:36.912 13:16:49 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 116968 ']' 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:36.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.912 13:16:49 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:36.912 [2024-07-15 13:16:49.253596] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:36.912 [2024-07-15 13:16:49.253725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116968 ] 00:32:37.190 [2024-07-15 13:16:49.393375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.190 [2024-07-15 13:16:49.478030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.775 13:16:50 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:37.775 13:16:50 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:37.775 13:16:50 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JuGrLOIA2m 00:32:38.339 13:16:50 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:38.339 [2024-07-15 13:16:50.794422] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:38.597 nvme0n1 00:32:38.597 13:16:50 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:38.597 Running I/O for 1 seconds... 
00:32:39.969 00:32:39.969 Latency(us) 00:32:39.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.969 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:39.969 Verification LBA range: start 0x0 length 0x2000 00:32:39.969 nvme0n1 : 1.02 3462.38 13.52 0.00 0.00 36497.05 7417.48 44802.79 00:32:39.969 =================================================================================================================== 00:32:39.969 Total : 3462.38 13.52 0.00 0.00 36497.05 7417.48 44802.79 00:32:39.969 0 00:32:39.969 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:32:39.969 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.969 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:39.969 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.969 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:32:39.969 "subsystems": [ 00:32:39.969 { 00:32:39.969 "subsystem": "keyring", 00:32:39.969 "config": [ 00:32:39.969 { 00:32:39.969 "method": "keyring_file_add_key", 00:32:39.969 "params": { 00:32:39.969 "name": "key0", 00:32:39.969 "path": "/tmp/tmp.JuGrLOIA2m" 00:32:39.969 } 00:32:39.969 } 00:32:39.969 ] 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "subsystem": "iobuf", 00:32:39.969 "config": [ 00:32:39.969 { 00:32:39.969 "method": "iobuf_set_options", 00:32:39.969 "params": { 00:32:39.969 "large_bufsize": 135168, 00:32:39.969 "large_pool_count": 1024, 00:32:39.969 "small_bufsize": 8192, 00:32:39.969 "small_pool_count": 8192 00:32:39.969 } 00:32:39.969 } 00:32:39.969 ] 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "subsystem": "sock", 00:32:39.969 "config": [ 00:32:39.969 { 00:32:39.969 "method": "sock_set_default_impl", 00:32:39.969 "params": { 00:32:39.969 "impl_name": "posix" 00:32:39.969 } 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "method": "sock_impl_set_options", 00:32:39.969 "params": { 00:32:39.969 "enable_ktls": false, 00:32:39.969 "enable_placement_id": 0, 00:32:39.969 "enable_quickack": false, 00:32:39.969 "enable_recv_pipe": true, 00:32:39.969 "enable_zerocopy_send_client": false, 00:32:39.969 "enable_zerocopy_send_server": true, 00:32:39.969 "impl_name": "ssl", 00:32:39.969 "recv_buf_size": 4096, 00:32:39.969 "send_buf_size": 4096, 00:32:39.969 "tls_version": 0, 00:32:39.969 "zerocopy_threshold": 0 00:32:39.969 } 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "method": "sock_impl_set_options", 00:32:39.969 "params": { 00:32:39.969 "enable_ktls": false, 00:32:39.969 "enable_placement_id": 0, 00:32:39.969 "enable_quickack": false, 00:32:39.969 "enable_recv_pipe": true, 00:32:39.969 "enable_zerocopy_send_client": false, 00:32:39.969 "enable_zerocopy_send_server": true, 00:32:39.969 "impl_name": "posix", 00:32:39.969 "recv_buf_size": 2097152, 00:32:39.969 "send_buf_size": 2097152, 00:32:39.969 "tls_version": 0, 00:32:39.969 "zerocopy_threshold": 0 00:32:39.969 } 00:32:39.969 } 00:32:39.969 ] 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "subsystem": "vmd", 00:32:39.969 "config": [] 00:32:39.969 }, 00:32:39.969 { 00:32:39.969 "subsystem": "accel", 00:32:39.969 "config": [ 00:32:39.969 { 00:32:39.969 "method": "accel_set_options", 00:32:39.969 "params": { 00:32:39.969 "buf_count": 2048, 00:32:39.969 "large_cache_size": 16, 00:32:39.969 "sequence_count": 2048, 00:32:39.969 "small_cache_size": 128, 00:32:39.969 "task_count": 2048 
00:32:39.969 } 00:32:39.969 } 00:32:39.970 ] 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "subsystem": "bdev", 00:32:39.970 "config": [ 00:32:39.970 { 00:32:39.970 "method": "bdev_set_options", 00:32:39.970 "params": { 00:32:39.970 "bdev_auto_examine": true, 00:32:39.970 "bdev_io_cache_size": 256, 00:32:39.970 "bdev_io_pool_size": 65535, 00:32:39.970 "iobuf_large_cache_size": 16, 00:32:39.970 "iobuf_small_cache_size": 128 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_raid_set_options", 00:32:39.970 "params": { 00:32:39.970 "process_window_size_kb": 1024 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_iscsi_set_options", 00:32:39.970 "params": { 00:32:39.970 "timeout_sec": 30 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_nvme_set_options", 00:32:39.970 "params": { 00:32:39.970 "action_on_timeout": "none", 00:32:39.970 "allow_accel_sequence": false, 00:32:39.970 "arbitration_burst": 0, 00:32:39.970 "bdev_retry_count": 3, 00:32:39.970 "ctrlr_loss_timeout_sec": 0, 00:32:39.970 "delay_cmd_submit": true, 00:32:39.970 "dhchap_dhgroups": [ 00:32:39.970 "null", 00:32:39.970 "ffdhe2048", 00:32:39.970 "ffdhe3072", 00:32:39.970 "ffdhe4096", 00:32:39.970 "ffdhe6144", 00:32:39.970 "ffdhe8192" 00:32:39.970 ], 00:32:39.970 "dhchap_digests": [ 00:32:39.970 "sha256", 00:32:39.970 "sha384", 00:32:39.970 "sha512" 00:32:39.970 ], 00:32:39.970 "disable_auto_failback": false, 00:32:39.970 "fast_io_fail_timeout_sec": 0, 00:32:39.970 "generate_uuids": false, 00:32:39.970 "high_priority_weight": 0, 00:32:39.970 "io_path_stat": false, 00:32:39.970 "io_queue_requests": 0, 00:32:39.970 "keep_alive_timeout_ms": 10000, 00:32:39.970 "low_priority_weight": 0, 00:32:39.970 "medium_priority_weight": 0, 00:32:39.970 "nvme_adminq_poll_period_us": 10000, 00:32:39.970 "nvme_error_stat": false, 00:32:39.970 "nvme_ioq_poll_period_us": 0, 00:32:39.970 "rdma_cm_event_timeout_ms": 0, 00:32:39.970 "rdma_max_cq_size": 0, 00:32:39.970 "rdma_srq_size": 0, 00:32:39.970 "reconnect_delay_sec": 0, 00:32:39.970 "timeout_admin_us": 0, 00:32:39.970 "timeout_us": 0, 00:32:39.970 "transport_ack_timeout": 0, 00:32:39.970 "transport_retry_count": 4, 00:32:39.970 "transport_tos": 0 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_nvme_set_hotplug", 00:32:39.970 "params": { 00:32:39.970 "enable": false, 00:32:39.970 "period_us": 100000 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_malloc_create", 00:32:39.970 "params": { 00:32:39.970 "block_size": 4096, 00:32:39.970 "name": "malloc0", 00:32:39.970 "num_blocks": 8192, 00:32:39.970 "optimal_io_boundary": 0, 00:32:39.970 "physical_block_size": 4096, 00:32:39.970 "uuid": "306bdcc9-2f12-4e77-8aed-29381810ca4b" 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "bdev_wait_for_examine" 00:32:39.970 } 00:32:39.970 ] 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "subsystem": "nbd", 00:32:39.970 "config": [] 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "subsystem": "scheduler", 00:32:39.970 "config": [ 00:32:39.970 { 00:32:39.970 "method": "framework_set_scheduler", 00:32:39.970 "params": { 00:32:39.970 "name": "static" 00:32:39.970 } 00:32:39.970 } 00:32:39.970 ] 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "subsystem": "nvmf", 00:32:39.970 "config": [ 00:32:39.970 { 00:32:39.970 "method": "nvmf_set_config", 00:32:39.970 "params": { 00:32:39.970 "admin_cmd_passthru": { 00:32:39.970 "identify_ctrlr": false 00:32:39.970 }, 00:32:39.970 "discovery_filter": 
"match_any" 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_set_max_subsystems", 00:32:39.970 "params": { 00:32:39.970 "max_subsystems": 1024 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_set_crdt", 00:32:39.970 "params": { 00:32:39.970 "crdt1": 0, 00:32:39.970 "crdt2": 0, 00:32:39.970 "crdt3": 0 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_create_transport", 00:32:39.970 "params": { 00:32:39.970 "abort_timeout_sec": 1, 00:32:39.970 "ack_timeout": 0, 00:32:39.970 "buf_cache_size": 4294967295, 00:32:39.970 "c2h_success": false, 00:32:39.970 "data_wr_pool_size": 0, 00:32:39.970 "dif_insert_or_strip": false, 00:32:39.970 "in_capsule_data_size": 4096, 00:32:39.970 "io_unit_size": 131072, 00:32:39.970 "max_aq_depth": 128, 00:32:39.970 "max_io_qpairs_per_ctrlr": 127, 00:32:39.970 "max_io_size": 131072, 00:32:39.970 "max_queue_depth": 128, 00:32:39.970 "num_shared_buffers": 511, 00:32:39.970 "sock_priority": 0, 00:32:39.970 "trtype": "TCP", 00:32:39.970 "zcopy": false 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_create_subsystem", 00:32:39.970 "params": { 00:32:39.970 "allow_any_host": false, 00:32:39.970 "ana_reporting": false, 00:32:39.970 "max_cntlid": 65519, 00:32:39.970 "max_namespaces": 32, 00:32:39.970 "min_cntlid": 1, 00:32:39.970 "model_number": "SPDK bdev Controller", 00:32:39.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.970 "serial_number": "00000000000000000000" 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_subsystem_add_host", 00:32:39.970 "params": { 00:32:39.970 "host": "nqn.2016-06.io.spdk:host1", 00:32:39.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.970 "psk": "key0" 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_subsystem_add_ns", 00:32:39.970 "params": { 00:32:39.970 "namespace": { 00:32:39.970 "bdev_name": "malloc0", 00:32:39.970 "nguid": "306BDCC92F124E778AED29381810CA4B", 00:32:39.970 "no_auto_visible": false, 00:32:39.970 "nsid": 1, 00:32:39.970 "uuid": "306bdcc9-2f12-4e77-8aed-29381810ca4b" 00:32:39.970 }, 00:32:39.970 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:39.970 } 00:32:39.970 }, 00:32:39.970 { 00:32:39.970 "method": "nvmf_subsystem_add_listener", 00:32:39.970 "params": { 00:32:39.970 "listen_address": { 00:32:39.970 "adrfam": "IPv4", 00:32:39.970 "traddr": "10.0.0.2", 00:32:39.970 "trsvcid": "4420", 00:32:39.970 "trtype": "TCP" 00:32:39.970 }, 00:32:39.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.970 "secure_channel": true 00:32:39.970 } 00:32:39.970 } 00:32:39.970 ] 00:32:39.970 } 00:32:39.970 ] 00:32:39.970 }' 00:32:39.970 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:40.534 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:32:40.534 "subsystems": [ 00:32:40.534 { 00:32:40.534 "subsystem": "keyring", 00:32:40.534 "config": [ 00:32:40.534 { 00:32:40.534 "method": "keyring_file_add_key", 00:32:40.534 "params": { 00:32:40.534 "name": "key0", 00:32:40.534 "path": "/tmp/tmp.JuGrLOIA2m" 00:32:40.534 } 00:32:40.534 } 00:32:40.534 ] 00:32:40.534 }, 00:32:40.534 { 00:32:40.534 "subsystem": "iobuf", 00:32:40.535 "config": [ 00:32:40.535 { 00:32:40.535 "method": "iobuf_set_options", 00:32:40.535 "params": { 00:32:40.535 "large_bufsize": 135168, 00:32:40.535 "large_pool_count": 1024, 00:32:40.535 "small_bufsize": 8192, 00:32:40.535 "small_pool_count": 
8192 00:32:40.535 } 00:32:40.535 } 00:32:40.535 ] 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "subsystem": "sock", 00:32:40.535 "config": [ 00:32:40.535 { 00:32:40.535 "method": "sock_set_default_impl", 00:32:40.535 "params": { 00:32:40.535 "impl_name": "posix" 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "sock_impl_set_options", 00:32:40.535 "params": { 00:32:40.535 "enable_ktls": false, 00:32:40.535 "enable_placement_id": 0, 00:32:40.535 "enable_quickack": false, 00:32:40.535 "enable_recv_pipe": true, 00:32:40.535 "enable_zerocopy_send_client": false, 00:32:40.535 "enable_zerocopy_send_server": true, 00:32:40.535 "impl_name": "ssl", 00:32:40.535 "recv_buf_size": 4096, 00:32:40.535 "send_buf_size": 4096, 00:32:40.535 "tls_version": 0, 00:32:40.535 "zerocopy_threshold": 0 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "sock_impl_set_options", 00:32:40.535 "params": { 00:32:40.535 "enable_ktls": false, 00:32:40.535 "enable_placement_id": 0, 00:32:40.535 "enable_quickack": false, 00:32:40.535 "enable_recv_pipe": true, 00:32:40.535 "enable_zerocopy_send_client": false, 00:32:40.535 "enable_zerocopy_send_server": true, 00:32:40.535 "impl_name": "posix", 00:32:40.535 "recv_buf_size": 2097152, 00:32:40.535 "send_buf_size": 2097152, 00:32:40.535 "tls_version": 0, 00:32:40.535 "zerocopy_threshold": 0 00:32:40.535 } 00:32:40.535 } 00:32:40.535 ] 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "subsystem": "vmd", 00:32:40.535 "config": [] 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "subsystem": "accel", 00:32:40.535 "config": [ 00:32:40.535 { 00:32:40.535 "method": "accel_set_options", 00:32:40.535 "params": { 00:32:40.535 "buf_count": 2048, 00:32:40.535 "large_cache_size": 16, 00:32:40.535 "sequence_count": 2048, 00:32:40.535 "small_cache_size": 128, 00:32:40.535 "task_count": 2048 00:32:40.535 } 00:32:40.535 } 00:32:40.535 ] 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "subsystem": "bdev", 00:32:40.535 "config": [ 00:32:40.535 { 00:32:40.535 "method": "bdev_set_options", 00:32:40.535 "params": { 00:32:40.535 "bdev_auto_examine": true, 00:32:40.535 "bdev_io_cache_size": 256, 00:32:40.535 "bdev_io_pool_size": 65535, 00:32:40.535 "iobuf_large_cache_size": 16, 00:32:40.535 "iobuf_small_cache_size": 128 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_raid_set_options", 00:32:40.535 "params": { 00:32:40.535 "process_window_size_kb": 1024 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_iscsi_set_options", 00:32:40.535 "params": { 00:32:40.535 "timeout_sec": 30 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_nvme_set_options", 00:32:40.535 "params": { 00:32:40.535 "action_on_timeout": "none", 00:32:40.535 "allow_accel_sequence": false, 00:32:40.535 "arbitration_burst": 0, 00:32:40.535 "bdev_retry_count": 3, 00:32:40.535 "ctrlr_loss_timeout_sec": 0, 00:32:40.535 "delay_cmd_submit": true, 00:32:40.535 "dhchap_dhgroups": [ 00:32:40.535 "null", 00:32:40.535 "ffdhe2048", 00:32:40.535 "ffdhe3072", 00:32:40.535 "ffdhe4096", 00:32:40.535 "ffdhe6144", 00:32:40.535 "ffdhe8192" 00:32:40.535 ], 00:32:40.535 "dhchap_digests": [ 00:32:40.535 "sha256", 00:32:40.535 "sha384", 00:32:40.535 "sha512" 00:32:40.535 ], 00:32:40.535 "disable_auto_failback": false, 00:32:40.535 "fast_io_fail_timeout_sec": 0, 00:32:40.535 "generate_uuids": false, 00:32:40.535 "high_priority_weight": 0, 00:32:40.535 "io_path_stat": false, 00:32:40.535 "io_queue_requests": 512, 00:32:40.535 "keep_alive_timeout_ms": 10000, 
00:32:40.535 "low_priority_weight": 0, 00:32:40.535 "medium_priority_weight": 0, 00:32:40.535 "nvme_adminq_poll_period_us": 10000, 00:32:40.535 "nvme_error_stat": false, 00:32:40.535 "nvme_ioq_poll_period_us": 0, 00:32:40.535 "rdma_cm_event_timeout_ms": 0, 00:32:40.535 "rdma_max_cq_size": 0, 00:32:40.535 "rdma_srq_size": 0, 00:32:40.535 "reconnect_delay_sec": 0, 00:32:40.535 "timeout_admin_us": 0, 00:32:40.535 "timeout_us": 0, 00:32:40.535 "transport_ack_timeout": 0, 00:32:40.535 "transport_retry_count": 4, 00:32:40.535 "transport_tos": 0 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_nvme_attach_controller", 00:32:40.535 "params": { 00:32:40.535 "adrfam": "IPv4", 00:32:40.535 "ctrlr_loss_timeout_sec": 0, 00:32:40.535 "ddgst": false, 00:32:40.535 "fast_io_fail_timeout_sec": 0, 00:32:40.535 "hdgst": false, 00:32:40.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.535 "name": "nvme0", 00:32:40.535 "prchk_guard": false, 00:32:40.535 "prchk_reftag": false, 00:32:40.535 "psk": "key0", 00:32:40.535 "reconnect_delay_sec": 0, 00:32:40.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.535 "traddr": "10.0.0.2", 00:32:40.535 "trsvcid": "4420", 00:32:40.535 "trtype": "TCP" 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_nvme_set_hotplug", 00:32:40.535 "params": { 00:32:40.535 "enable": false, 00:32:40.535 "period_us": 100000 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_enable_histogram", 00:32:40.535 "params": { 00:32:40.535 "enable": true, 00:32:40.535 "name": "nvme0n1" 00:32:40.535 } 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "method": "bdev_wait_for_examine" 00:32:40.535 } 00:32:40.535 ] 00:32:40.535 }, 00:32:40.535 { 00:32:40.535 "subsystem": "nbd", 00:32:40.535 "config": [] 00:32:40.535 } 00:32:40.535 ] 00:32:40.535 }' 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@266 -- # killprocess 116968 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116968 ']' 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116968 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116968 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:40.535 killing process with pid 116968 00:32:40.535 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116968' 00:32:40.535 Received shutdown signal, test time was about 1.000000 seconds 00:32:40.535 00:32:40.535 Latency(us) 00:32:40.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.536 =================================================================================================================== 00:32:40.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116968 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116968 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- 
target/tls.sh@267 -- # killprocess 116922 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 116922 ']' 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 116922 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116922 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:40.536 killing process with pid 116922 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116922' 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 116922 00:32:40.536 13:16:52 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 116922 00:32:40.793 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:32:40.793 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:40.793 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:40.793 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:40.793 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:32:40.793 "subsystems": [ 00:32:40.793 { 00:32:40.793 "subsystem": "keyring", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "keyring_file_add_key", 00:32:40.793 "params": { 00:32:40.793 "name": "key0", 00:32:40.793 "path": "/tmp/tmp.JuGrLOIA2m" 00:32:40.793 } 00:32:40.793 } 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "iobuf", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "iobuf_set_options", 00:32:40.793 "params": { 00:32:40.793 "large_bufsize": 135168, 00:32:40.793 "large_pool_count": 1024, 00:32:40.793 "small_bufsize": 8192, 00:32:40.793 "small_pool_count": 8192 00:32:40.793 } 00:32:40.793 } 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "sock", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "sock_set_default_impl", 00:32:40.793 "params": { 00:32:40.793 "impl_name": "posix" 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "sock_impl_set_options", 00:32:40.793 "params": { 00:32:40.793 "enable_ktls": false, 00:32:40.793 "enable_placement_id": 0, 00:32:40.793 "enable_quickack": false, 00:32:40.793 "enable_recv_pipe": true, 00:32:40.793 "enable_zerocopy_send_client": false, 00:32:40.793 "enable_zerocopy_send_server": true, 00:32:40.793 "impl_name": "ssl", 00:32:40.793 "recv_buf_size": 4096, 00:32:40.793 "send_buf_size": 4096, 00:32:40.793 "tls_version": 0, 00:32:40.793 "zerocopy_threshold": 0 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "sock_impl_set_options", 00:32:40.793 "params": { 00:32:40.793 "enable_ktls": false, 00:32:40.793 "enable_placement_id": 0, 00:32:40.793 "enable_quickack": false, 00:32:40.793 "enable_recv_pipe": true, 00:32:40.793 "enable_zerocopy_send_client": false, 00:32:40.793 "enable_zerocopy_send_server": true, 
00:32:40.793 "impl_name": "posix", 00:32:40.793 "recv_buf_size": 2097152, 00:32:40.793 "send_buf_size": 2097152, 00:32:40.793 "tls_version": 0, 00:32:40.793 "zerocopy_threshold": 0 00:32:40.793 } 00:32:40.793 } 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "vmd", 00:32:40.793 "config": [] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "accel", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "accel_set_options", 00:32:40.793 "params": { 00:32:40.793 "buf_count": 2048, 00:32:40.793 "large_cache_size": 16, 00:32:40.793 "sequence_count": 2048, 00:32:40.793 "small_cache_size": 128, 00:32:40.793 "task_count": 2048 00:32:40.793 } 00:32:40.793 } 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "bdev", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "bdev_set_options", 00:32:40.793 "params": { 00:32:40.793 "bdev_auto_examine": true, 00:32:40.793 "bdev_io_cache_size": 256, 00:32:40.793 "bdev_io_pool_size": 65535, 00:32:40.793 "iobuf_large_cache_size": 16, 00:32:40.793 "iobuf_small_cache_size": 128 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_raid_set_options", 00:32:40.793 "params": { 00:32:40.793 "process_window_size_kb": 1024 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_iscsi_set_options", 00:32:40.793 "params": { 00:32:40.793 "timeout_sec": 30 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_nvme_set_options", 00:32:40.793 "params": { 00:32:40.793 "action_on_timeout": "none", 00:32:40.793 "allow_accel_sequence": false, 00:32:40.793 "arbitration_burst": 0, 00:32:40.793 "bdev_retry_count": 3, 00:32:40.793 "ctrlr_loss_timeout_sec": 0, 00:32:40.793 "delay_cmd_submit": true, 00:32:40.793 "dhchap_dhgroups": [ 00:32:40.793 "null", 00:32:40.793 "ffdhe2048", 00:32:40.793 "ffdhe3072", 00:32:40.793 "ffdhe4096", 00:32:40.793 "ffdhe6144", 00:32:40.793 "ffdhe8192" 00:32:40.793 ], 00:32:40.793 "dhchap_digests": [ 00:32:40.793 "sha256", 00:32:40.793 "sha384", 00:32:40.793 "sha512" 00:32:40.793 ], 00:32:40.793 "disable_auto_failback": false, 00:32:40.793 "fast_io_fail_timeout_sec": 0, 00:32:40.793 "generate_uuids": false, 00:32:40.793 "high_priority_weight": 0, 00:32:40.793 "io_path_stat": false, 00:32:40.793 "io_queue_requests": 0, 00:32:40.793 "keep_alive_timeout_ms": 10000, 00:32:40.793 "low_priority_weight": 0, 00:32:40.793 "medium_priority_weight": 0, 00:32:40.793 "nvme_adminq_poll_period_us": 10000, 00:32:40.793 "nvme_error_stat": false, 00:32:40.793 "nvme_ioq_poll_period_us": 0, 00:32:40.793 "rdma_cm_event_timeout_ms": 0, 00:32:40.793 "rdma_max_cq_size": 0, 00:32:40.793 "rdma_srq_size": 0, 00:32:40.793 "reconnect_delay_sec": 0, 00:32:40.793 "timeout_admin_us": 0, 00:32:40.793 "timeout_us": 0, 00:32:40.793 "transport_ack_timeout": 0, 00:32:40.793 "transport_retry_count": 4, 00:32:40.793 "transport_tos": 0 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_nvme_set_hotplug", 00:32:40.793 "params": { 00:32:40.793 "enable": false, 00:32:40.793 "period_us": 100000 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_malloc_create", 00:32:40.793 "params": { 00:32:40.793 "block_size": 4096, 00:32:40.793 "name": "malloc0", 00:32:40.793 "num_blocks": 8192, 00:32:40.793 "optimal_io_boundary": 0, 00:32:40.793 "physical_block_size": 4096, 00:32:40.793 "uuid": "306bdcc9-2f12-4e77-8aed-29381810ca4b" 00:32:40.793 } 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "method": "bdev_wait_for_examine" 00:32:40.793 
} 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "nbd", 00:32:40.793 "config": [] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "scheduler", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "framework_set_scheduler", 00:32:40.793 "params": { 00:32:40.793 "name": "static" 00:32:40.793 } 00:32:40.793 } 00:32:40.793 ] 00:32:40.793 }, 00:32:40.793 { 00:32:40.793 "subsystem": "nvmf", 00:32:40.793 "config": [ 00:32:40.793 { 00:32:40.793 "method": "nvmf_set_config", 00:32:40.793 "params": { 00:32:40.793 "admin_cmd_passthru": { 00:32:40.793 "identify_ctrlr": false 00:32:40.793 }, 00:32:40.794 "discovery_filter": "match_any" 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_set_max_subsystems", 00:32:40.794 "params": { 00:32:40.794 "max_subsystems": 1024 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_set_crdt", 00:32:40.794 "params": { 00:32:40.794 "crdt1": 0, 00:32:40.794 "crdt2": 0, 00:32:40.794 "crdt3": 0 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_create_transport", 00:32:40.794 "params": { 00:32:40.794 "abort_timeout_sec": 1, 00:32:40.794 "ack_timeout": 0, 00:32:40.794 "buf_cache_size": 4294967295, 00:32:40.794 "c2h_success": false, 00:32:40.794 "data_wr_pool_size": 0, 00:32:40.794 "dif_insert_or_strip": false, 00:32:40.794 "in_capsule_data_size": 4096, 00:32:40.794 "io_unit_size": 131072, 00:32:40.794 "max_aq_depth": 128, 00:32:40.794 "max_io_qpairs_per_ctrlr": 127, 00:32:40.794 "max_io_size": 131072, 00:32:40.794 "max_queue_depth": 128, 00:32:40.794 "num_shared_buffers": 511, 00:32:40.794 "sock_priority": 0, 00:32:40.794 "trtype": "TCP", 00:32:40.794 "zcopy": false 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_create_subsystem", 00:32:40.794 "params": { 00:32:40.794 "allow_any_host": false, 00:32:40.794 "ana_reporting": false, 00:32:40.794 "max_cntlid": 65519, 00:32:40.794 "max_namespaces": 32, 00:32:40.794 "min_cntlid": 1, 00:32:40.794 "model_number": "SPDK bdev Controller", 00:32:40.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.794 "serial_number": "00000000000000000000" 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_subsystem_add_host", 00:32:40.794 "params": { 00:32:40.794 "host": "nqn.2016-06.io.spdk:host1", 00:32:40.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.794 "psk": "key0" 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_subsystem_add_ns", 00:32:40.794 "params": { 00:32:40.794 "namespace": { 00:32:40.794 "bdev_name": "malloc0", 00:32:40.794 "nguid": "306BDCC92F124E778AED29381810CA4B", 00:32:40.794 "no_auto_visible": false, 00:32:40.794 "nsid": 1, 00:32:40.794 "uuid": "306bdcc9-2f12-4e77-8aed-29381810ca4b" 00:32:40.794 }, 00:32:40.794 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:40.794 } 00:32:40.794 }, 00:32:40.794 { 00:32:40.794 "method": "nvmf_subsystem_add_listener", 00:32:40.794 "params": { 00:32:40.794 "listen_address": { 00:32:40.794 "adrfam": "IPv4", 00:32:40.794 "traddr": "10.0.0.2", 00:32:40.794 "trsvcid": "4420", 00:32:40.794 "trtype": "TCP" 00:32:40.794 }, 00:32:40.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.794 "secure_channel": true 00:32:40.794 } 00:32:40.794 } 00:32:40.794 ] 00:32:40.794 } 00:32:40.794 ] 00:32:40.794 }' 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=117059 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -c /dev/fd/62 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 117059 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 117059 ']' 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:40.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:40.794 13:16:53 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:40.794 [2024-07-15 13:16:53.141113] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:40.794 [2024-07-15 13:16:53.142291] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:40.794 [2024-07-15 13:16:53.142381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.050 [2024-07-15 13:16:53.274382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.050 [2024-07-15 13:16:53.333840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:41.050 [2024-07-15 13:16:53.333914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.050 [2024-07-15 13:16:53.333928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.050 [2024-07-15 13:16:53.333939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.050 [2024-07-15 13:16:53.333946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.050 [2024-07-15 13:16:53.334048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.050 [2024-07-15 13:16:53.335226] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:41.050 [2024-07-15 13:16:53.488634] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
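Editorial note: the target above is started in interrupt mode with its whole configuration supplied as startup JSON on /dev/fd/62 rather than through RPCs issued after boot. Below is a minimal sketch of just the TLS-relevant subset of that configuration, distilled from the JSON printed above: the keyring_file_add_key entry, the nvmf_subsystem_add_host entry binding PSK "key0" to nqn.2016-06.io.spdk:host1, and the listener created with secure_channel set to true. The scratch path /tmp/tls_tgt.json is hypothetical; every method name, parameter, NQN, key path and address is copied from the log, all other options are left at their defaults, and the malloc0 bdev/namespace entries present in the full config are omitted for brevity.

# Sketch only (assumptions noted above): write a reduced config and start the
# target with it; the real test pipes the full config in on /dev/fd/62.
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.JuGrLOIA2m" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt --interrupt-mode -c /tmp/tls_tgt.json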
00:32:41.307 [2024-07-15 13:16:53.532148] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.307 [2024-07-15 13:16:53.564055] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:41.307 [2024-07-15 13:16:53.564297] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=117102 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 117102 /var/tmp/bdevperf.sock 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 117102 ']' 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:41.870 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:32:41.871 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
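Editorial note: the initiator side is the mirror image. bdevperf is handed its configuration on /dev/fd/63 (printed in full just below): the same keyring key name plus a bdev_nvme_attach_controller entry that connects to 10.0.0.2:4420 with "psk": "key0", so the TCP connection comes up as a TLS session keyed by the shared PSK. A minimal sketch of that subset follows; /tmp/tls_bdevperf.json is a made-up scratch path, while the bdevperf flags, socket path and all JSON parameters are taken from the command above and the config below.

# Sketch only: the TLS-relevant subset of the bdevperf JSON shown below.
cat > /tmp/tls_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.JuGrLOIA2m" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same invocation as the log, minus the fd redirection; -z makes bdevperf wait
# until perform_tests is triggered over /var/tmp/bdevperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/tls_bdevperf.json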
00:32:41.871 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:41.871 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:32:41.871 "subsystems": [ 00:32:41.871 { 00:32:41.871 "subsystem": "keyring", 00:32:41.871 "config": [ 00:32:41.871 { 00:32:41.871 "method": "keyring_file_add_key", 00:32:41.871 "params": { 00:32:41.871 "name": "key0", 00:32:41.871 "path": "/tmp/tmp.JuGrLOIA2m" 00:32:41.871 } 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "iobuf", 00:32:41.871 "config": [ 00:32:41.871 { 00:32:41.871 "method": "iobuf_set_options", 00:32:41.871 "params": { 00:32:41.871 "large_bufsize": 135168, 00:32:41.871 "large_pool_count": 1024, 00:32:41.871 "small_bufsize": 8192, 00:32:41.871 "small_pool_count": 8192 00:32:41.871 } 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "sock", 00:32:41.871 "config": [ 00:32:41.871 { 00:32:41.871 "method": "sock_set_default_impl", 00:32:41.871 "params": { 00:32:41.871 "impl_name": "posix" 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "sock_impl_set_options", 00:32:41.871 "params": { 00:32:41.871 "enable_ktls": false, 00:32:41.871 "enable_placement_id": 0, 00:32:41.871 "enable_quickack": false, 00:32:41.871 "enable_recv_pipe": true, 00:32:41.871 "enable_zerocopy_send_client": false, 00:32:41.871 "enable_zerocopy_send_server": true, 00:32:41.871 "impl_name": "ssl", 00:32:41.871 "recv_buf_size": 4096, 00:32:41.871 "send_buf_size": 4096, 00:32:41.871 "tls_version": 0, 00:32:41.871 "zerocopy_threshold": 0 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "sock_impl_set_options", 00:32:41.871 "params": { 00:32:41.871 "enable_ktls": false, 00:32:41.871 "enable_placement_id": 0, 00:32:41.871 "enable_quickack": false, 00:32:41.871 "enable_recv_pipe": true, 00:32:41.871 "enable_zerocopy_send_client": false, 00:32:41.871 "enable_zerocopy_send_server": true, 00:32:41.871 "impl_name": "posix", 00:32:41.871 "recv_buf_size": 2097152, 00:32:41.871 "send_buf_size": 2097152, 00:32:41.871 "tls_version": 0, 00:32:41.871 "zerocopy_threshold": 0 00:32:41.871 } 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "vmd", 00:32:41.871 "config": [] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "accel", 00:32:41.871 "config": [ 00:32:41.871 { 00:32:41.871 "method": "accel_set_options", 00:32:41.871 "params": { 00:32:41.871 "buf_count": 2048, 00:32:41.871 "large_cache_size": 16, 00:32:41.871 "sequence_count": 2048, 00:32:41.871 "small_cache_size": 128, 00:32:41.871 "task_count": 2048 00:32:41.871 } 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "bdev", 00:32:41.871 "config": [ 00:32:41.871 { 00:32:41.871 "method": "bdev_set_options", 00:32:41.871 "params": { 00:32:41.871 "bdev_auto_examine": true, 00:32:41.871 "bdev_io_cache_size": 256, 00:32:41.871 "bdev_io_pool_size": 65535, 00:32:41.871 "iobuf_large_cache_size": 16, 00:32:41.871 "iobuf_small_cache_size": 128 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_raid_set_options", 00:32:41.871 "params": { 00:32:41.871 "process_window_size_kb": 1024 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_iscsi_set_options", 00:32:41.871 "params": { 00:32:41.871 "timeout_sec": 30 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_nvme_set_options", 00:32:41.871 "params": { 00:32:41.871 
"action_on_timeout": "none", 00:32:41.871 "allow_accel_sequence": false, 00:32:41.871 "arbitration_burst": 0, 00:32:41.871 "bdev_retry_count": 3, 00:32:41.871 "ctrlr_loss_timeout_sec": 0, 00:32:41.871 "delay_cmd_submit": true, 00:32:41.871 "dhchap_dhgroups": [ 00:32:41.871 "null", 00:32:41.871 "ffdhe2048", 00:32:41.871 "ffdhe3072", 00:32:41.871 "ffdhe4096", 00:32:41.871 "ffdhe6144", 00:32:41.871 "ffdhe8192" 00:32:41.871 ], 00:32:41.871 "dhchap_digests": [ 00:32:41.871 "sha256", 00:32:41.871 "sha384", 00:32:41.871 "sha512" 00:32:41.871 ], 00:32:41.871 "disable_auto_failback": false, 00:32:41.871 "fast_io_fail_timeout_sec": 0, 00:32:41.871 "generate_uuids": false, 00:32:41.871 "high_priority_weight": 0, 00:32:41.871 "io_path_stat": false, 00:32:41.871 "io_queue_requests": 512, 00:32:41.871 "keep_alive_timeout_ms": 10000, 00:32:41.871 "low_priority_weight": 0, 00:32:41.871 "medium_priority_weight": 0, 00:32:41.871 "nvme_adminq_poll_period_us": 10000, 00:32:41.871 "nvme_error_stat": false, 00:32:41.871 "nvme_ioq_poll_period_us": 0, 00:32:41.871 "rdma_cm_event_timeout_ms": 0, 00:32:41.871 "rdma_max_cq_size": 0, 00:32:41.871 "rdma_srq_size": 0, 00:32:41.871 "reconnect_delay_sec": 0, 00:32:41.871 "timeout_admin_us": 0, 00:32:41.871 "timeout_us": 0, 00:32:41.871 "transport_ack_timeout": 0, 00:32:41.871 "transport_retry_count": 4, 00:32:41.871 "transport_tos": 0 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_nvme_attach_controller", 00:32:41.871 "params": { 00:32:41.871 "adrfam": "IPv4", 00:32:41.871 "ctrlr_loss_timeout_sec": 0, 00:32:41.871 "ddgst": false, 00:32:41.871 "fast_io_fail_timeout_sec": 0, 00:32:41.871 "hdgst": false, 00:32:41.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.871 "name": "nvme0", 00:32:41.871 "prchk_guard": false, 00:32:41.871 "prchk_reftag": false, 00:32:41.871 "psk": "key0", 00:32:41.871 "reconnect_delay_sec": 0, 00:32:41.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.871 "traddr": "10.0.0.2", 00:32:41.871 "trsvcid": "4420", 00:32:41.871 "trtype": "TCP" 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_nvme_set_hotplug", 00:32:41.871 "params": { 00:32:41.871 "enable": false, 00:32:41.871 "period_us": 100000 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_enable_histogram", 00:32:41.871 "params": { 00:32:41.871 "enable": true, 00:32:41.871 "name": "nvme0n1" 00:32:41.871 } 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "method": "bdev_wait_for_examine" 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }, 00:32:41.871 { 00:32:41.871 "subsystem": "nbd", 00:32:41.871 "config": [] 00:32:41.871 } 00:32:41.871 ] 00:32:41.871 }' 00:32:41.871 13:16:54 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:41.871 [2024-07-15 13:16:54.256242] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:32:41.871 [2024-07-15 13:16:54.256382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117102 ] 00:32:42.128 [2024-07-15 13:16:54.397431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.128 [2024-07-15 13:16:54.483933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.385 [2024-07-15 13:16:54.630541] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:42.951 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.951 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:42.951 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:42.951 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:32:43.211 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.211 13:16:55 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:43.467 Running I/O for 1 seconds... 00:32:44.398 00:32:44.398 Latency(us) 00:32:44.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.398 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:44.398 Verification LBA range: start 0x0 length 0x2000 00:32:44.398 nvme0n1 : 1.02 3432.43 13.41 0.00 0.00 36865.50 6583.39 30384.87 00:32:44.398 =================================================================================================================== 00:32:44.398 Total : 3432.43 13.41 0.00 0.00 36865.50 6583.39 30384.87 00:32:44.398 0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:44.398 nvmf_trace.0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@16 -- # killprocess 117102 
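Editorial note: the one-second verify run above reports 3432.43 IOPS at the 4 KiB I/O size set with -o 4k, which is consistent with the MiB/s column: 3432.43 * 4096 B per I/O is roughly 14.06 MB/s, i.e. 13.41 MiB/s. A quick stand-alone check, not part of the test:

# Sanity-check the reported throughput from the IOPS and I/O size columns.
awk 'BEGIN { printf "%.2f MiB/s\n", 3432.43 * 4096 / 1048576 }'   # prints 13.41 MiB/s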
00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 117102 ']' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 117102 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117102 00:32:44.398 killing process with pid 117102 00:32:44.398 Received shutdown signal, test time was about 1.000000 seconds 00:32:44.398 00:32:44.398 Latency(us) 00:32:44.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.398 =================================================================================================================== 00:32:44.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117102' 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 117102 00:32:44.398 13:16:56 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 117102 00:32:44.656 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:32:44.656 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@492 -- # nvmfcleanup 00:32:44.656 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.913 rmmod nvme_tcp 00:32:44.913 rmmod nvme_fabrics 00:32:44.913 rmmod nvme_keyring 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@493 -- # '[' -n 117059 ']' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@494 -- # killprocess 117059 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 117059 ']' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 117059 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117059 00:32:44.913 killing process with pid 117059 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:44.913 13:16:57 
nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117059' 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@967 -- # kill 117059 00:32:44.913 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@972 -- # wait 117059 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@282 -- # remove_spdk_ns 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OTqFNLemYn /tmp/tmp.Pi8QA3XAok /tmp/tmp.JuGrLOIA2m 00:32:45.172 ************************************ 00:32:45.172 END TEST nvmf_tls 00:32:45.172 ************************************ 00:32:45.172 00:32:45.172 real 1m25.728s 00:32:45.172 user 1m56.739s 00:32:45.172 sys 0m32.426s 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@66 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.172 ************************************ 00:32:45.172 START TEST nvmf_fips 00:32:45.172 ************************************ 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:45.172 * Looking for test storage... 
00:32:45.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.172 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:32:45.173 13:16:57 
nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:32:45.173 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:32:45.431 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@37 -- # cat 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@127 -- # : 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:32:45.432 13:16:57 
nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:32:45.432 Error setting digest 00:32:45.432 00D26568607F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:32:45.432 00D26568607F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@452 -- # prepare_net_devs 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@414 -- # local -g is_hw=no 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@416 -- # remove_spdk_ns 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@436 -- # nvmf_veth_init 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:45.432 
13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:32:45.432 Cannot find device "nvmf_tgt_br" 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@159 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:32:45.432 Cannot find device "nvmf_tgt_br2" 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@160 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:32:45.432 Cannot find device "nvmf_tgt_br" 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@162 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:32:45.432 Cannot find device "nvmf_tgt_br2" 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@163 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:45.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@166 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:45.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@167 -- # true 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:45.432 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:45.690 13:16:57 
nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:32:45.690 13:16:57 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:32:45.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:32:45.690 00:32:45.690 --- 10.0.0.2 ping statistics --- 00:32:45.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.690 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:32:45.690 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:32:45.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:45.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:32:45.691 00:32:45.691 --- 10.0.0.3 ping statistics --- 00:32:45.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.691 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:45.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:32:45.691 00:32:45.691 --- 10.0.0.1 ping statistics --- 00:32:45.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.691 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@437 -- # return 0 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@485 -- # nvmfpid=117378 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@486 -- # waitforlisten 117378 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 117378 ']' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:45.691 13:16:58 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:45.949 [2024-07-15 13:16:58.162393] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.949 [2024-07-15 13:16:58.190506] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
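The nvmfappstart step traced above amounts to launching nvmf_tgt inside the test namespace and then waiting for its RPC socket to come up. A condensed sketch, with paths and flags taken from this run's trace; the polling loop is only a minimal stand-in for the waitforlisten helper:

    # Target runs inside the namespace built by nvmf_veth_init: one core (-m 0x2),
    # interrupt mode, all tracepoint groups enabled (-e 0xFFFF).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Minimal stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done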
00:32:45.949 [2024-07-15 13:16:58.190800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.949 [2024-07-15 13:16:58.327645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.949 [2024-07-15 13:16:58.389406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.949 [2024-07-15 13:16:58.389472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.949 [2024-07-15 13:16:58.389501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.949 [2024-07-15 13:16:58.389517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.949 [2024-07-15 13:16:58.389527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.949 [2024-07-15 13:16:58.389577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.205 [2024-07-15 13:16:58.440400] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.205 [2024-07-15 13:16:58.440821] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:46.779 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:47.037 [2024-07-15 13:16:59.458430] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.037 [2024-07-15 13:16:59.478337] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:47.037 [2024-07-15 
13:16:59.478628] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.295 [2024-07-15 13:16:59.514236] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:47.295 malloc0 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=117436 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 117436 /var/tmp/bdevperf.sock 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 117436 ']' 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:47.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:47.295 13:16:59 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:47.295 [2024-07-15 13:16:59.657210] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:32:47.295 [2024-07-15 13:16:59.657348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117436 ] 00:32:47.553 [2024-07-15 13:16:59.819871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.553 [2024-07-15 13:16:59.906434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.518 13:17:00 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:48.518 13:17:00 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:32:48.518 13:17:00 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:48.518 [2024-07-15 13:17:00.947761] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:48.518 [2024-07-15 13:17:00.947958] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:48.789 TLSTESTn1 00:32:48.789 13:17:01 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:48.789 Running I/O for 10 seconds... 
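Condensed, the TLS portion of fips.sh traced above writes the interpolated PSK to disk with mode 0600, feeds it to the target through setup_nvmf_tgt_conf, and then attaches bdevperf to the TLS listener with the same key before perform_tests drives the 10-second verify workload. A sketch with values copied from this run (target-side setup omitted):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
    # bdevperf in RPC-server mode (-z) on its own socket: verify workload, qd 128, 4 KiB I/O, 10 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # Attach to the TLS listener using the PSK (the deprecated --psk path form this test exercises).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    # Kick off the timed run whose results follow below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests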
00:32:58.752 00:32:58.752 Latency(us) 00:32:58.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.752 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:58.752 Verification LBA range: start 0x0 length 0x2000 00:32:58.752 TLSTESTn1 : 10.03 3261.81 12.74 0.00 0.00 39146.64 7566.43 43134.60 00:32:58.752 =================================================================================================================== 00:32:58.752 Total : 3261.81 12.74 0.00 0.00 39146.64 7566.43 43134.60 00:32:58.752 0 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:32:58.752 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:58.752 nvmf_trace.0 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@16 -- # killprocess 117436 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 117436 ']' 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 117436 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117436 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:59.010 killing process with pid 117436 00:32:59.010 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117436' 00:32:59.011 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@967 -- # kill 117436 00:32:59.011 Received shutdown signal, test time was about 10.000000 seconds 00:32:59.011 00:32:59.011 Latency(us) 00:32:59.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.011 =================================================================================================================== 00:32:59.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:59.011 [2024-07-15 13:17:11.306377] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:59.011 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@972 -- # wait 117436 00:32:59.011 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:32:59.011 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@492 -- # nvmfcleanup 00:32:59.011 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.269 rmmod nvme_tcp 00:32:59.269 rmmod nvme_fabrics 00:32:59.269 rmmod nvme_keyring 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@493 -- # '[' -n 117378 ']' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@494 -- # killprocess 117378 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 117378 ']' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 117378 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117378 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:59.269 killing process with pid 117378 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117378' 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@967 -- # kill 117378 00:32:59.269 [2024-07-15 13:17:11.588849] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:59.269 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@972 -- # wait 117378 00:32:59.527 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@282 -- # remove_spdk_ns 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:32:59.528 00:32:59.528 real 0m14.331s 00:32:59.528 user 0m17.419s 00:32:59.528 sys 0m6.366s 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:59.528 ************************************ 00:32:59.528 END TEST nvmf_fips 00:32:59.528 ************************************ 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@69 -- # '[' 0 -eq 1 ']' 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@75 -- # [[ virt == phy ]] 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@90 -- # timing_exit target 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@92 -- # timing_enter host 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@94 -- # [[ 0 -eq 0 ]] 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@95 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.528 ************************************ 00:32:59.528 START TEST nvmf_multicontroller 00:32:59.528 ************************************ 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:59.528 * Looking for test storage... 
00:32:59.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.528 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.787 13:17:11 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 
00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@452 -- # prepare_net_devs 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@414 -- # local -g is_hw=no 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@416 -- # remove_spdk_ns 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@436 -- # nvmf_veth_init 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@155 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:32:59.787 Cannot find device "nvmf_tgt_br" 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:32:59.787 Cannot find device "nvmf_tgt_br2" 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@160 -- # true 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:32:59.787 Cannot find device "nvmf_tgt_br" 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:32:59.787 Cannot find device "nvmf_tgt_br2" 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:59.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:32:59.787 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:59.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr 
add 10.0.0.2/24 dev nvmf_tgt_if 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:32:59.788 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:00.045 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:00.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:33:00.046 00:33:00.046 --- 10.0.0.2 ping statistics --- 00:33:00.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.046 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:00.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:00.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:33:00.046 00:33:00.046 --- 10.0.0.3 ping statistics --- 00:33:00.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.046 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:00.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:33:00.046 00:33:00.046 --- 10.0.0.1 ping statistics --- 00:33:00.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.046 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@437 -- # return 0 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@485 -- # nvmfpid=117795 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@486 -- # waitforlisten 117795 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 117795 ']' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.046 13:17:12 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:00.046 [2024-07-15 13:17:12.463860] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.046 [2024-07-15 13:17:12.465612] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:00.046 [2024-07-15 13:17:12.465712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.304 [2024-07-15 13:17:12.607863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:00.304 [2024-07-15 13:17:12.666886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.304 [2024-07-15 13:17:12.666952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.304 [2024-07-15 13:17:12.666964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.304 [2024-07-15 13:17:12.666972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.304 [2024-07-15 13:17:12.666979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.304 [2024-07-15 13:17:12.667471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.304 [2024-07-15 13:17:12.667698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.304 [2024-07-15 13:17:12.667707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.304 [2024-07-15 13:17:12.714656] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:00.304 [2024-07-15 13:17:12.714750] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.304 [2024-07-15 13:17:12.715081] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:00.304 [2024-07-15 13:17:12.715499] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
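The nvmftestinit block traced above rebuilds the same veth topology the FIPS test used: an initiator-side veth at 10.0.0.1 on the host, two target-side veths at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge with TCP port 4420 allowed in. Condensed from the ip/iptables commands in the trace (run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT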
00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 [2024-07-15 13:17:13.560601] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 Malloc0 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 [2024-07-15 13:17:13.620830] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.238 13:17:13 
nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 [2024-07-15 13:17:13.628812] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 Malloc1 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=117847 
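Before bdevperf is attached, the rpc_cmd calls traced above configure the target for the multicontroller scenario: one TCP transport, two subsystems each backed by a 64 MB malloc bdev with 512-byte blocks, and listeners for both subsystems on ports 4420 and 4421 of 10.0.0.2. Condensed below; rpc.py stands for scripts/rpc.py against the target's default /var/tmp/spdk.sock, which is what rpc_cmd wraps in these scripts:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf then starts on its own socket (-w write, qd 128, 4 KiB I/O, 1 s, -f) and the
    # test attaches NVMe0 with a fixed host NQN/address so the duplicate-attach checks that
    # follow can verify that re-attaching the same controller name with different parameters
    # is rejected with Code=-114.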
00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 117847 /var/tmp/bdevperf.sock 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 117847 ']' 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:01.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.238 13:17:13 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.805 NVMe0n1 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.805 1 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:01.805 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@650 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.806 2024/07/15 13:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:01.806 request: 00:33:01.806 { 00:33:01.806 "method": "bdev_nvme_attach_controller", 00:33:01.806 "params": { 00:33:01.806 "name": "NVMe0", 00:33:01.806 "trtype": "tcp", 00:33:01.806 "traddr": "10.0.0.2", 00:33:01.806 "adrfam": "ipv4", 00:33:01.806 "trsvcid": "4420", 00:33:01.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.806 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:33:01.806 "hostaddr": "10.0.0.2", 00:33:01.806 "hostsvcid": "60000", 00:33:01.806 "prchk_reftag": false, 00:33:01.806 "prchk_guard": false, 00:33:01.806 "hdgst": false, 00:33:01.806 "ddgst": false 00:33:01.806 } 00:33:01.806 } 00:33:01.806 Got JSON-RPC error response 00:33:01.806 GoRPCClient: error on JSON-RPC call 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.806 2024/07/15 13:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:01.806 request: 00:33:01.806 { 00:33:01.806 "method": "bdev_nvme_attach_controller", 00:33:01.806 "params": { 00:33:01.806 "name": "NVMe0", 00:33:01.806 "trtype": "tcp", 00:33:01.806 "traddr": "10.0.0.2", 00:33:01.806 "adrfam": "ipv4", 00:33:01.806 "trsvcid": "4420", 00:33:01.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:01.806 "hostaddr": "10.0.0.2", 00:33:01.806 "hostsvcid": "60000", 00:33:01.806 "prchk_reftag": false, 00:33:01.806 "prchk_guard": false, 00:33:01.806 "hdgst": false, 00:33:01.806 "ddgst": false 00:33:01.806 } 00:33:01.806 } 00:33:01.806 Got JSON-RPC error response 00:33:01.806 GoRPCClient: error on JSON-RPC call 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x 
disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.806 2024/07/15 13:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:33:01.806 request: 00:33:01.806 { 00:33:01.806 "method": "bdev_nvme_attach_controller", 00:33:01.806 "params": { 00:33:01.806 "name": "NVMe0", 00:33:01.806 "trtype": "tcp", 00:33:01.806 "traddr": "10.0.0.2", 00:33:01.806 "adrfam": "ipv4", 00:33:01.806 "trsvcid": "4420", 00:33:01.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.806 "hostaddr": "10.0.0.2", 00:33:01.806 "hostsvcid": "60000", 00:33:01.806 "prchk_reftag": false, 00:33:01.806 "prchk_guard": false, 00:33:01.806 "hdgst": false, 00:33:01.806 "ddgst": false, 00:33:01.806 "multipath": "disable" 00:33:01.806 } 00:33:01.806 } 00:33:01.806 Got JSON-RPC error response 00:33:01.806 GoRPCClient: error on JSON-RPC call 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.806 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.806 2024/07/15 13:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:01.806 request: 00:33:01.806 { 00:33:01.806 "method": "bdev_nvme_attach_controller", 00:33:01.806 "params": { 00:33:01.806 "name": "NVMe0", 00:33:01.806 "trtype": "tcp", 00:33:01.806 "traddr": "10.0.0.2", 00:33:01.806 "adrfam": "ipv4", 00:33:01.806 "trsvcid": "4420", 00:33:01.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.806 "hostaddr": "10.0.0.2", 00:33:01.806 "hostsvcid": "60000", 00:33:01.806 "prchk_reftag": false, 00:33:01.806 "prchk_guard": false, 00:33:01.806 "hdgst": false, 00:33:01.806 "ddgst": false, 00:33:01.806 "multipath": "failover" 00:33:01.806 } 00:33:01.806 } 00:33:01.806 Got JSON-RPC error response 00:33:01.807 GoRPCClient: error on JSON-RPC call 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.807 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.807 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:02.064 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:33:02.064 13:17:14 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:03.454 0 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 117847 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 117847 ']' 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 117847 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117847 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.454 killing process with pid 117847 00:33:03.454 13:17:15 
nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117847' 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 117847 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 117847 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:33:03.454 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:33:03.454 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:33:03.454 [2024-07-15 13:17:13.738561] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:33:03.454 [2024-07-15 13:17:13.738724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117847 ] 00:33:03.454 [2024-07-15 13:17:13.873460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.454 [2024-07-15 13:17:13.933278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.454 [2024-07-15 13:17:14.308615] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name c8e48439-08ba-4f95-8c24-c9233e224407 already exists 00:33:03.454 [2024-07-15 13:17:14.308698] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:c8e48439-08ba-4f95-8c24-c9233e224407 alias for bdev NVMe1n1 00:33:03.454 [2024-07-15 13:17:14.308718] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:33:03.454 Running I/O for 1 seconds... 
00:33:03.454 00:33:03.454 Latency(us) 00:33:03.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.454 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:33:03.455 NVMe0n1 : 1.01 18909.84 73.87 0.00 0.00 6758.63 2219.29 14477.50 00:33:03.455 =================================================================================================================== 00:33:03.455 Total : 18909.84 73.87 0.00 0.00 6758.63 2219.29 14477.50 00:33:03.455 Received shutdown signal, test time was about 1.000000 seconds 00:33:03.455 00:33:03.455 Latency(us) 00:33:03.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.455 =================================================================================================================== 00:33:03.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.455 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.455 rmmod nvme_tcp 00:33:03.455 rmmod nvme_fabrics 00:33:03.455 rmmod nvme_keyring 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@493 -- # '[' -n 117795 ']' 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@494 -- # killprocess 117795 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 117795 ']' 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 117795 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117795 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:03.455 killing process with pid 117795 
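A note on the four rejected bdev_nvme_attach_controller calls earlier in this test: reusing the controller name NVMe0 with a different host NQN, with a different subsystem NQN, with multipath disabled, or with failover against the already-attached 10.0.0.2:4420 path all return -114, while adding the subsystem's second listener under the same name is accepted (the @79 call above). A sketch of that accepted call, parameters copied from the log and bdevperf's RPC socket assumed unchanged:

    # attach 10.0.0.2:4421 as a second path under the existing controller name NVMe0
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

bdev_nvme_get_controllers then reports two entries (NVMe0 and the separately attached NVMe1) before perform_tests runs the 1-second write workload summarized in try.txt above.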
00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117795' 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 117795 00:33:03.455 13:17:15 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 117795 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@282 -- # remove_spdk_ns 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:03.713 00:33:03.713 real 0m4.247s 00:33:03.713 user 0m8.711s 00:33:03.713 sys 0m1.756s 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.713 13:17:16 nvmf_tcp_interrupt_mode.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:03.713 ************************************ 00:33:03.713 END TEST nvmf_multicontroller 00:33:03.713 ************************************ 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@96 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:03.971 ************************************ 00:33:03.971 START TEST nvmf_aer 00:33:03.971 ************************************ 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:03.971 * Looking for test storage... 
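The nvmf_aer case that starts here is self-contained; if it needs to be reproduced outside the autorun harness, a minimal sketch is simply to invoke the script the way run_test does above, assuming an SPDK checkout at the path used by this job and root privileges for the veth/netns setup it performs:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/host/aer.sh --transport=tcp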
00:33:03.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.971 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- 
nvmf/common.sh@416 -- # remove_spdk_ns 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:03.972 Cannot find device "nvmf_tgt_br" 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@159 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:33:03.972 Cannot find device "nvmf_tgt_br2" 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@160 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:03.972 Cannot find device "nvmf_tgt_br" 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@162 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:03.972 Cannot find device "nvmf_tgt_br2" 00:33:03.972 13:17:16 
nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@163 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:03.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@166 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:03.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@167 -- # true 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:03.972 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@202 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:04.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:04.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:33:04.231 00:33:04.231 --- 10.0.0.2 ping statistics --- 00:33:04.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.231 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:04.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:04.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:33:04.231 00:33:04.231 --- 10.0.0.3 ping statistics --- 00:33:04.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.231 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:04.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:04.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:33:04.231 00:33:04.231 --- 10.0.0.1 ping statistics --- 00:33:04.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.231 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:33:04.231 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@437 -- # return 0 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@485 -- # nvmfpid=118081 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@486 -- # waitforlisten 118081 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 118081 ']' 00:33:04.232 13:17:16 
nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.232 13:17:16 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:04.490 [2024-07-15 13:17:16.706686] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:04.490 [2024-07-15 13:17:16.707981] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:33:04.490 [2024-07-15 13:17:16.708053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.490 [2024-07-15 13:17:16.849341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:04.490 [2024-07-15 13:17:16.919516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.490 [2024-07-15 13:17:16.919982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.490 [2024-07-15 13:17:16.920320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.490 [2024-07-15 13:17:16.920596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.490 [2024-07-15 13:17:16.920851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.490 [2024-07-15 13:17:16.921157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.490 [2024-07-15 13:17:16.921263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:04.490 [2024-07-15 13:17:16.921541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:04.490 [2024-07-15 13:17:16.921595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.748 [2024-07-15 13:17:16.987126] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:04.748 [2024-07-15 13:17:16.987498] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:04.748 [2024-07-15 13:17:16.987878] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.748 [2024-07-15 13:17:16.988238] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:04.748 [2024-07-15 13:17:16.988861] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
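The block above is the stock virtual-network bring-up from test/nvmf/common.sh followed by the target start. Condensed into a sketch with the names and addresses from the log (second target interface, iptables ACCEPT rule and error handling omitted; paths relative to the SPDK repo):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    # bring the remaining links up (the @187-@193 calls above), then start the target
    # in interrupt mode inside the namespace, as logged:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF

The --interrupt-mode flag appended by nvmf/common.sh@34 above is what produces the "Set spdk_thread (...) to intr mode" notices that follow once the app is up.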
00:33:05.314 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:05.314 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:33:05.314 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:33:05.314 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:05.314 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 [2024-07-15 13:17:17.826928] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 Malloc0 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.572 [2024-07-15 13:17:17.879195] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set 
+x 00:33:05.572 [ 00:33:05.572 { 00:33:05.572 "allow_any_host": true, 00:33:05.572 "hosts": [], 00:33:05.572 "listen_addresses": [], 00:33:05.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:05.572 "subtype": "Discovery" 00:33:05.572 }, 00:33:05.572 { 00:33:05.572 "allow_any_host": true, 00:33:05.572 "hosts": [], 00:33:05.572 "listen_addresses": [ 00:33:05.572 { 00:33:05.572 "adrfam": "IPv4", 00:33:05.572 "traddr": "10.0.0.2", 00:33:05.572 "trsvcid": "4420", 00:33:05.572 "trtype": "TCP" 00:33:05.572 } 00:33:05.572 ], 00:33:05.572 "max_cntlid": 65519, 00:33:05.572 "max_namespaces": 2, 00:33:05.572 "min_cntlid": 1, 00:33:05.572 "model_number": "SPDK bdev Controller", 00:33:05.572 "namespaces": [ 00:33:05.572 { 00:33:05.572 "bdev_name": "Malloc0", 00:33:05.572 "name": "Malloc0", 00:33:05.572 "nguid": "50460955665D4805828AADFFFFD2E9B3", 00:33:05.572 "nsid": 1, 00:33:05.572 "uuid": "50460955-665d-4805-828a-adffffd2e9b3" 00:33:05.572 } 00:33:05.572 ], 00:33:05.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.572 "serial_number": "SPDK00000000000001", 00:33:05.572 "subtype": "NVMe" 00:33:05.572 } 00:33:05.572 ] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@33 -- # aerpid=118134 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:33:05.572 13:17:17 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:33:05.572 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:05.572 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:33:05.572 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:33:05.572 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 Malloc1 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 [ 00:33:05.831 { 00:33:05.831 "allow_any_host": true, 00:33:05.831 "hosts": [], 00:33:05.831 "listen_addresses": [], 00:33:05.831 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:05.831 "subtype": "Discovery" 00:33:05.831 }, 00:33:05.831 { 00:33:05.831 "allow_any_host": true, 00:33:05.831 "hosts": [], 00:33:05.831 "listen_addresses": [ 00:33:05.831 { 00:33:05.831 "adrfam": "IPv4", 00:33:05.831 "traddr": "10.0.0.2", 00:33:05.831 "trsvcid": "4420", 00:33:05.831 "trtype": "TCP" 00:33:05.831 } 00:33:05.831 ], 00:33:05.831 "max_cntlid": 65519, 00:33:05.831 "max_namespaces": 2, 00:33:05.831 "min_cntlid": 1, 00:33:05.831 "model_number": "SPDK bdev Controller", 00:33:05.831 "namespaces": [ 00:33:05.831 { 00:33:05.831 "bdev_name": "Malloc0", 00:33:05.831 "name": "Malloc0", 00:33:05.831 "nguid": "50460955665D4805828AADFFFFD2E9B3", 00:33:05.831 "nsid": 1, 00:33:05.831 "uuid": "50460955-665d-4805-828a-adffffd2e9b3" 00:33:05.831 }, 00:33:05.831 { 00:33:05.831 "bdev_name": "Malloc1", 00:33:05.831 "name": "Malloc1", 00:33:05.831 "nguid": "36860ED04E084AA19BC8FA0A877D94B7", 00:33:05.831 "nsid": 2, 00:33:05.831 "uuid": "36860ed0-4e08-4aa1-9bc8-fa0a877d94b7" 00:33:05.831 } 00:33:05.831 ], 00:33:05.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.831 "serial_number": "SPDK00000000000001", 00:33:05.831 "subtype": "NVMe" 00:33:05.831 } 00:33:05.831 ] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@43 -- # wait 118134 00:33:05.831 Asynchronous Event Request test 00:33:05.831 Attaching to 10.0.0.2 00:33:05.831 Attached to 10.0.0.2 00:33:05.831 Registering asynchronous event callbacks... 00:33:05.831 Starting namespace attribute notice tests for all controllers... 00:33:05.831 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:33:05.831 aer_cb - Changed Namespace 00:33:05.831 Cleaning up... 
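What the aer utility reports here: it attached to nqn.2016-06.io.spdk:cnode1 over 10.0.0.2:4420, armed an Asynchronous Event Request, and received the namespace-attribute-changed notice (Changed Namespace List, log page 0x04) when Malloc1 was hot-added as nsid 2 by the rpc_cmd a few entries above. An illustrative pairing of the two sides, arguments copied verbatim from the log (the -t touch file appears to be the ready signal that the script's waitforfile polls for):

    # host side: the aer test utility, started before the hot-add
    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # target side: the namespace add that triggers the notice
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2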
00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.831 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.831 rmmod nvme_tcp 00:33:06.089 rmmod nvme_fabrics 00:33:06.089 rmmod nvme_keyring 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@493 -- # '[' -n 118081 ']' 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@494 -- # killprocess 118081 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 118081 ']' 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 118081 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118081 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:06.089 killing process with pid 118081 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 118081' 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@967 -- # kill 118081 00:33:06.089 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@972 -- # wait 118081 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@282 -- # remove_spdk_ns 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:06.347 00:33:06.347 real 0m2.405s 00:33:06.347 user 0m2.228s 00:33:06.347 sys 0m0.698s 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:06.347 ************************************ 00:33:06.347 END TEST nvmf_aer 00:33:06.347 ************************************ 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@97 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:06.347 ************************************ 00:33:06.347 START TEST nvmf_async_init 00:33:06.347 ************************************ 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:33:06.347 * Looking for test storage... 
00:33:06.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.347 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:33:06.348 13:17:18 
nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dcf9f74820614129bfbb376c9dc42401 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@416 -- # remove_spdk_ns 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@158 
-- # ip link set nvmf_init_br nomaster 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:06.348 Cannot find device "nvmf_tgt_br" 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:33:06.348 Cannot find device "nvmf_tgt_br2" 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@160 -- # true 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:06.348 Cannot find device "nvmf_tgt_br" 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:06.348 Cannot find device "nvmf_tgt_br2" 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:33:06.348 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:06.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:06.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:33:06.606 13:17:18 
nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:06.606 13:17:18 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:06.606 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:06.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:33:06.864 00:33:06.864 --- 10.0.0.2 ping statistics --- 00:33:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.864 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:06.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:06.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:33:06.864 00:33:06.864 --- 10.0.0.3 ping statistics --- 00:33:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.864 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:06.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:33:06.864 00:33:06.864 --- 10.0.0.1 ping statistics --- 00:33:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.864 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@437 -- # return 0 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@485 -- # nvmfpid=118305 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@486 -- # waitforlisten 118305 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 118305 ']' 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:06.864 13:17:19 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:06.864 [2024-07-15 13:17:19.219148] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:06.864 [2024-07-15 13:17:19.220890] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:06.864 [2024-07-15 13:17:19.220991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.121 [2024-07-15 13:17:19.362245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.121 [2024-07-15 13:17:19.449424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.121 [2024-07-15 13:17:19.449505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.121 [2024-07-15 13:17:19.449524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.121 [2024-07-15 13:17:19.449539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.121 [2024-07-15 13:17:19.449551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.121 [2024-07-15 13:17:19.449591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.121 [2024-07-15 13:17:19.500038] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:07.121 [2024-07-15 13:17:19.500383] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 [2024-07-15 13:17:20.266342] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 null0 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 
nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dcf9f74820614129bfbb376c9dc42401 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 [2024-07-15 13:17:20.306476] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.078 nvme0n1 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.078 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.348 [ 00:33:08.348 { 00:33:08.348 "aliases": [ 00:33:08.348 "dcf9f748-2061-4129-bfbb-376c9dc42401" 00:33:08.348 ], 00:33:08.348 "assigned_rate_limits": { 00:33:08.348 "r_mbytes_per_sec": 0, 00:33:08.348 "rw_ios_per_sec": 0, 00:33:08.348 "rw_mbytes_per_sec": 0, 00:33:08.348 "w_mbytes_per_sec": 0 00:33:08.348 }, 00:33:08.348 "block_size": 512, 00:33:08.348 "claimed": false, 00:33:08.348 "driver_specific": { 00:33:08.348 "mp_policy": "active_passive", 00:33:08.348 "nvme": [ 00:33:08.348 { 00:33:08.348 "ctrlr_data": { 00:33:08.348 "ana_reporting": false, 00:33:08.348 "cntlid": 1, 00:33:08.348 "firmware_revision": "24.09", 00:33:08.348 "model_number": "SPDK bdev Controller", 00:33:08.348 "multi_ctrlr": true, 00:33:08.348 "oacs": { 00:33:08.348 "firmware": 0, 00:33:08.348 
"format": 0, 00:33:08.348 "ns_manage": 0, 00:33:08.348 "security": 0 00:33:08.348 }, 00:33:08.348 "serial_number": "00000000000000000000", 00:33:08.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.348 "vendor_id": "0x8086" 00:33:08.348 }, 00:33:08.348 "ns_data": { 00:33:08.348 "can_share": true, 00:33:08.348 "id": 1 00:33:08.348 }, 00:33:08.348 "trid": { 00:33:08.348 "adrfam": "IPv4", 00:33:08.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.348 "traddr": "10.0.0.2", 00:33:08.348 "trsvcid": "4420", 00:33:08.348 "trtype": "TCP" 00:33:08.348 }, 00:33:08.348 "vs": { 00:33:08.348 "nvme_version": "1.3" 00:33:08.348 } 00:33:08.348 } 00:33:08.348 ] 00:33:08.348 }, 00:33:08.348 "memory_domains": [ 00:33:08.348 { 00:33:08.348 "dma_device_id": "system", 00:33:08.348 "dma_device_type": 1 00:33:08.348 } 00:33:08.348 ], 00:33:08.348 "name": "nvme0n1", 00:33:08.348 "num_blocks": 2097152, 00:33:08.348 "product_name": "NVMe disk", 00:33:08.348 "supported_io_types": { 00:33:08.348 "abort": true, 00:33:08.348 "compare": true, 00:33:08.348 "compare_and_write": true, 00:33:08.348 "copy": true, 00:33:08.348 "flush": true, 00:33:08.348 "get_zone_info": false, 00:33:08.348 "nvme_admin": true, 00:33:08.348 "nvme_io": true, 00:33:08.349 "nvme_io_md": false, 00:33:08.349 "nvme_iov_md": false, 00:33:08.349 "read": true, 00:33:08.349 "reset": true, 00:33:08.349 "seek_data": false, 00:33:08.349 "seek_hole": false, 00:33:08.349 "unmap": false, 00:33:08.349 "write": true, 00:33:08.349 "write_zeroes": true, 00:33:08.349 "zcopy": false, 00:33:08.349 "zone_append": false, 00:33:08.349 "zone_management": false 00:33:08.349 }, 00:33:08.349 "uuid": "dcf9f748-2061-4129-bfbb-376c9dc42401", 00:33:08.349 "zoned": false 00:33:08.349 } 00:33:08.349 ] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 [2024-07-15 13:17:20.562201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:08.349 [2024-07-15 13:17:20.562449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcf8b0 (9): Bad file descriptor 00:33:08.349 [2024-07-15 13:17:20.695010] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 [ 00:33:08.349 { 00:33:08.349 "aliases": [ 00:33:08.349 "dcf9f748-2061-4129-bfbb-376c9dc42401" 00:33:08.349 ], 00:33:08.349 "assigned_rate_limits": { 00:33:08.349 "r_mbytes_per_sec": 0, 00:33:08.349 "rw_ios_per_sec": 0, 00:33:08.349 "rw_mbytes_per_sec": 0, 00:33:08.349 "w_mbytes_per_sec": 0 00:33:08.349 }, 00:33:08.349 "block_size": 512, 00:33:08.349 "claimed": false, 00:33:08.349 "driver_specific": { 00:33:08.349 "mp_policy": "active_passive", 00:33:08.349 "nvme": [ 00:33:08.349 { 00:33:08.349 "ctrlr_data": { 00:33:08.349 "ana_reporting": false, 00:33:08.349 "cntlid": 2, 00:33:08.349 "firmware_revision": "24.09", 00:33:08.349 "model_number": "SPDK bdev Controller", 00:33:08.349 "multi_ctrlr": true, 00:33:08.349 "oacs": { 00:33:08.349 "firmware": 0, 00:33:08.349 "format": 0, 00:33:08.349 "ns_manage": 0, 00:33:08.349 "security": 0 00:33:08.349 }, 00:33:08.349 "serial_number": "00000000000000000000", 00:33:08.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.349 "vendor_id": "0x8086" 00:33:08.349 }, 00:33:08.349 "ns_data": { 00:33:08.349 "can_share": true, 00:33:08.349 "id": 1 00:33:08.349 }, 00:33:08.349 "trid": { 00:33:08.349 "adrfam": "IPv4", 00:33:08.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.349 "traddr": "10.0.0.2", 00:33:08.349 "trsvcid": "4420", 00:33:08.349 "trtype": "TCP" 00:33:08.349 }, 00:33:08.349 "vs": { 00:33:08.349 "nvme_version": "1.3" 00:33:08.349 } 00:33:08.349 } 00:33:08.349 ] 00:33:08.349 }, 00:33:08.349 "memory_domains": [ 00:33:08.349 { 00:33:08.349 "dma_device_id": "system", 00:33:08.349 "dma_device_type": 1 00:33:08.349 } 00:33:08.349 ], 00:33:08.349 "name": "nvme0n1", 00:33:08.349 "num_blocks": 2097152, 00:33:08.349 "product_name": "NVMe disk", 00:33:08.349 "supported_io_types": { 00:33:08.349 "abort": true, 00:33:08.349 "compare": true, 00:33:08.349 "compare_and_write": true, 00:33:08.349 "copy": true, 00:33:08.349 "flush": true, 00:33:08.349 "get_zone_info": false, 00:33:08.349 "nvme_admin": true, 00:33:08.349 "nvme_io": true, 00:33:08.349 "nvme_io_md": false, 00:33:08.349 "nvme_iov_md": false, 00:33:08.349 "read": true, 00:33:08.349 "reset": true, 00:33:08.349 "seek_data": false, 00:33:08.349 "seek_hole": false, 00:33:08.349 "unmap": false, 00:33:08.349 "write": true, 00:33:08.349 "write_zeroes": true, 00:33:08.349 "zcopy": false, 00:33:08.349 "zone_append": false, 00:33:08.349 "zone_management": false 00:33:08.349 }, 00:33:08.349 "uuid": "dcf9f748-2061-4129-bfbb-376c9dc42401", 00:33:08.349 "zoned": false 00:33:08.349 } 00:33:08.349 ] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.u31wZrxNhp 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.u31wZrxNhp 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 [2024-07-15 13:17:20.758228] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:08.349 [2024-07-15 13:17:20.758432] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.u31wZrxNhp 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 [2024-07-15 13:17:20.766208] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.u31wZrxNhp 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.349 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.349 [2024-07-15 13:17:20.778211] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:08.349 [2024-07-15 13:17:20.778321] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:08.607 nvme0n1 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@69 -- # 
rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.607 [ 00:33:08.607 { 00:33:08.607 "aliases": [ 00:33:08.607 "dcf9f748-2061-4129-bfbb-376c9dc42401" 00:33:08.607 ], 00:33:08.607 "assigned_rate_limits": { 00:33:08.607 "r_mbytes_per_sec": 0, 00:33:08.607 "rw_ios_per_sec": 0, 00:33:08.607 "rw_mbytes_per_sec": 0, 00:33:08.607 "w_mbytes_per_sec": 0 00:33:08.607 }, 00:33:08.607 "block_size": 512, 00:33:08.607 "claimed": false, 00:33:08.607 "driver_specific": { 00:33:08.607 "mp_policy": "active_passive", 00:33:08.607 "nvme": [ 00:33:08.607 { 00:33:08.607 "ctrlr_data": { 00:33:08.607 "ana_reporting": false, 00:33:08.607 "cntlid": 3, 00:33:08.607 "firmware_revision": "24.09", 00:33:08.607 "model_number": "SPDK bdev Controller", 00:33:08.607 "multi_ctrlr": true, 00:33:08.607 "oacs": { 00:33:08.607 "firmware": 0, 00:33:08.607 "format": 0, 00:33:08.607 "ns_manage": 0, 00:33:08.607 "security": 0 00:33:08.607 }, 00:33:08.607 "serial_number": "00000000000000000000", 00:33:08.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.607 "vendor_id": "0x8086" 00:33:08.607 }, 00:33:08.607 "ns_data": { 00:33:08.607 "can_share": true, 00:33:08.607 "id": 1 00:33:08.607 }, 00:33:08.607 "trid": { 00:33:08.607 "adrfam": "IPv4", 00:33:08.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:08.607 "traddr": "10.0.0.2", 00:33:08.607 "trsvcid": "4421", 00:33:08.607 "trtype": "TCP" 00:33:08.607 }, 00:33:08.607 "vs": { 00:33:08.607 "nvme_version": "1.3" 00:33:08.607 } 00:33:08.607 } 00:33:08.607 ] 00:33:08.607 }, 00:33:08.607 "memory_domains": [ 00:33:08.607 { 00:33:08.607 "dma_device_id": "system", 00:33:08.607 "dma_device_type": 1 00:33:08.607 } 00:33:08.607 ], 00:33:08.607 "name": "nvme0n1", 00:33:08.607 "num_blocks": 2097152, 00:33:08.607 "product_name": "NVMe disk", 00:33:08.607 "supported_io_types": { 00:33:08.607 "abort": true, 00:33:08.607 "compare": true, 00:33:08.607 "compare_and_write": true, 00:33:08.607 "copy": true, 00:33:08.607 "flush": true, 00:33:08.607 "get_zone_info": false, 00:33:08.607 "nvme_admin": true, 00:33:08.607 "nvme_io": true, 00:33:08.607 "nvme_io_md": false, 00:33:08.607 "nvme_iov_md": false, 00:33:08.607 "read": true, 00:33:08.607 "reset": true, 00:33:08.607 "seek_data": false, 00:33:08.607 "seek_hole": false, 00:33:08.607 "unmap": false, 00:33:08.607 "write": true, 00:33:08.607 "write_zeroes": true, 00:33:08.607 "zcopy": false, 00:33:08.607 "zone_append": false, 00:33:08.607 "zone_management": false 00:33:08.607 }, 00:33:08.607 "uuid": "dcf9f748-2061-4129-bfbb-376c9dc42401", 00:33:08.607 "zoned": false 00:33:08.607 } 00:33:08.607 ] 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.607 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.u31wZrxNhp 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- 
host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.608 rmmod nvme_tcp 00:33:08.608 rmmod nvme_fabrics 00:33:08.608 rmmod nvme_keyring 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@493 -- # '[' -n 118305 ']' 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@494 -- # killprocess 118305 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 118305 ']' 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 118305 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.608 13:17:20 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118305 00:33:08.608 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:08.608 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:08.608 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118305' 00:33:08.608 killing process with pid 118305 00:33:08.608 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 118305 00:33:08.608 [2024-07-15 13:17:21.008620] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:33:08.608 [2024-07-15 13:17:21.008685] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:08.608 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 118305 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@282 -- # 
remove_spdk_ns 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:08.866 00:33:08.866 real 0m2.563s 00:33:08.866 user 0m1.551s 00:33:08.866 sys 0m0.672s 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:08.866 ************************************ 00:33:08.866 END TEST nvmf_async_init 00:33:08.866 ************************************ 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@98 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.866 ************************************ 00:33:08.866 START TEST dma 00:33:08.866 ************************************ 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:08.866 * Looking for test storage... 
00:33:08.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@7 -- # uname -s 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.866 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- paths/export.sh@5 -- # export PATH 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@51 -- # : 0 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- host/dma.sh@13 -- # exit 0 00:33:09.124 00:33:09.124 real 0m0.088s 00:33:09.124 user 0m0.045s 00:33:09.124 sys 0m0.047s 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.dma -- common/autotest_common.sh@10 -- # set +x 00:33:09.124 ************************************ 00:33:09.124 END TEST dma 00:33:09.124 ************************************ 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode -- 
common/autotest_common.sh@1142 -- # return 0 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.124 ************************************ 00:33:09.124 START TEST nvmf_identify 00:33:09.124 ************************************ 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:09.124 * Looking for test storage... 00:33:09.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.124 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@416 -- # remove_spdk_ns 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:09.125 13:17:21 
nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:09.125 Cannot find device "nvmf_tgt_br" 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@159 -- # true 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:33:09.125 Cannot find device "nvmf_tgt_br2" 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@160 -- # true 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:09.125 Cannot find device "nvmf_tgt_br" 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@162 -- # true 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:09.125 Cannot find device "nvmf_tgt_br2" 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@163 -- # true 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:09.125 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:09.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@166 -- # true 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:09.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@167 -- # true 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:33:09.383 13:17:21 
nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:09.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:33:09.383 00:33:09.383 --- 10.0.0.2 ping statistics --- 00:33:09.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.383 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:09.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:09.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:33:09.383 00:33:09.383 --- 10.0.0.3 ping statistics --- 00:33:09.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.383 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:33:09.383 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:09.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:33:09.641 00:33:09.641 --- 10.0.0.1 ping statistics --- 00:33:09.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.641 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@437 -- # return 0 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=118567 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 118567 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 118567 ']' 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:09.641 13:17:21 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:09.641 [2024-07-15 13:17:21.962800] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:09.641 [2024-07-15 13:17:21.964506] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:09.641 [2024-07-15 13:17:21.964606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.899 [2024-07-15 13:17:22.111240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:09.899 [2024-07-15 13:17:22.170826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.899 [2024-07-15 13:17:22.170886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.899 [2024-07-15 13:17:22.170897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.899 [2024-07-15 13:17:22.170906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.899 [2024-07-15 13:17:22.170913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.899 [2024-07-15 13:17:22.171017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.899 [2024-07-15 13:17:22.172805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.899 [2024-07-15 13:17:22.172908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:09.899 [2024-07-15 13:17:22.172922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.899 [2024-07-15 13:17:22.229155] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:09.899 [2024-07-15 13:17:22.229402] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:09.899 [2024-07-15 13:17:22.229601] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:09.899 [2024-07-15 13:17:22.229816] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:09.899 [2024-07-15 13:17:22.229920] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
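[editor's note] The nvmftestinit/nvmf_veth_init trace above builds the virtual topology that the interrupt-mode target is started on: a nvmf_tgt_ns_spdk network namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side veth on the host (10.0.0.1), and a nvmf_br bridge tying the host-side peer ends together. A minimal stand-alone sketch of that setup, reconstructed from the ip/iptables commands logged above (interface names and addresses are the ones this test run uses, not a general requirement):

    # Recreate the veth/namespace topology from nvmf_veth_init (sketch).
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-side, two target-side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target-side ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, inside and outside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers and open TCP/4420 for the initiator.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, mirroring the pings recorded in the log.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the bridge in place, the target launched inside the namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF line above) is reachable from the host over 10.0.0.2:4420.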
00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 [2024-07-15 13:17:22.326151] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:09.899 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 Malloc0 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 [2024-07-15 13:17:22.414250] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.156 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 [ 00:33:10.156 { 00:33:10.156 "allow_any_host": true, 00:33:10.156 "hosts": [], 00:33:10.156 "listen_addresses": [ 00:33:10.156 { 00:33:10.156 "adrfam": "IPv4", 00:33:10.156 "traddr": "10.0.0.2", 00:33:10.156 "trsvcid": "4420", 00:33:10.156 "trtype": "TCP" 00:33:10.156 } 00:33:10.156 ], 00:33:10.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:10.156 "subtype": "Discovery" 00:33:10.156 }, 00:33:10.156 { 00:33:10.156 "allow_any_host": true, 00:33:10.156 "hosts": [], 00:33:10.156 "listen_addresses": [ 00:33:10.156 { 00:33:10.156 "adrfam": "IPv4", 00:33:10.156 "traddr": "10.0.0.2", 00:33:10.156 "trsvcid": "4420", 00:33:10.156 "trtype": "TCP" 00:33:10.156 } 00:33:10.156 ], 00:33:10.156 "max_cntlid": 65519, 00:33:10.156 "max_namespaces": 32, 00:33:10.156 "min_cntlid": 1, 00:33:10.156 "model_number": "SPDK bdev Controller", 00:33:10.156 "namespaces": [ 00:33:10.156 { 00:33:10.156 "bdev_name": "Malloc0", 00:33:10.156 "eui64": "ABCDEF0123456789", 00:33:10.156 "name": "Malloc0", 00:33:10.156 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:10.156 "nsid": 1, 00:33:10.156 "uuid": "8a28aa3b-ee56-49db-a9c7-d30cc9ba1b14" 00:33:10.157 } 00:33:10.157 ], 00:33:10.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.157 "serial_number": "SPDK00000000000001", 00:33:10.157 "subtype": "NVMe" 00:33:10.157 } 00:33:10.157 ] 00:33:10.157 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.157 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:10.157 [2024-07-15 13:17:22.467003] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
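[editor's note] The rpc_cmd calls above configure the target before the identify run: a TCP transport with the options shown in the log, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and listeners for both the subsystem and discovery on 10.0.0.2:4420; nvmf_get_subsystems then prints the JSON captured above. A hedged sketch of the same sequence issued directly with scripts/rpc.py (rpc_cmd is the test-suite wrapper; the default /var/tmp/spdk.sock RPC socket is assumed here), ending with the spdk_nvme_identify invocation the test launches next:

    # Target configuration as performed by host/identify.sh (sketch; assumes
    # the interrupt-mode nvmf_tgt from the log is already running).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems

    # Query the discovery controller, as the log does below.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The nvme_tcp/nvme_ctrlr debug trace that follows is that identify command connecting to the discovery subsystem and walking the fabric connect, property get/set, and IDENTIFY steps before printing the controller report.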
00:33:10.157 [2024-07-15 13:17:22.467082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118606 ] 00:33:10.157 [2024-07-15 13:17:22.613362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:33:10.157 [2024-07-15 13:17:22.613451] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:10.157 [2024-07-15 13:17:22.613459] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:10.157 [2024-07-15 13:17:22.613476] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:10.157 [2024-07-15 13:17:22.613485] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:10.157 [2024-07-15 13:17:22.613655] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:33:10.157 [2024-07-15 13:17:22.613731] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14d3a60 0 00:33:10.157 [2024-07-15 13:17:22.621819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:10.157 [2024-07-15 13:17:22.621864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:10.157 [2024-07-15 13:17:22.621876] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:10.157 [2024-07-15 13:17:22.621884] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:10.157 [2024-07-15 13:17:22.621938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.157 [2024-07-15 13:17:22.621947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.157 [2024-07-15 13:17:22.621952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.157 [2024-07-15 13:17:22.621969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:10.157 [2024-07-15 13:17:22.622008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.416 [2024-07-15 13:17:22.629805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.416 [2024-07-15 13:17:22.629847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.416 [2024-07-15 13:17:22.629862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.629873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.416 [2024-07-15 13:17:22.629894] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:10.416 [2024-07-15 13:17:22.629911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:33:10.416 [2024-07-15 13:17:22.629922] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:33:10.416 [2024-07-15 13:17:22.629957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.629969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.416 
[2024-07-15 13:17:22.629976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.416 [2024-07-15 13:17:22.629998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.416 [2024-07-15 13:17:22.630055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.416 [2024-07-15 13:17:22.630147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.416 [2024-07-15 13:17:22.630163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.416 [2024-07-15 13:17:22.630171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.416 [2024-07-15 13:17:22.630188] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:33:10.416 [2024-07-15 13:17:22.630202] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:33:10.416 [2024-07-15 13:17:22.630218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.416 [2024-07-15 13:17:22.630249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.416 [2024-07-15 13:17:22.630288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.416 [2024-07-15 13:17:22.630344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.416 [2024-07-15 13:17:22.630360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.416 [2024-07-15 13:17:22.630369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.416 [2024-07-15 13:17:22.630389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:33:10.416 [2024-07-15 13:17:22.630404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:33:10.416 [2024-07-15 13:17:22.630417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.416 [2024-07-15 13:17:22.630439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.416 [2024-07-15 13:17:22.630477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.416 [2024-07-15 13:17:22.630530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.416 [2024-07-15 13:17:22.630546] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.416 [2024-07-15 13:17:22.630554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.416 [2024-07-15 13:17:22.630573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:10.416 [2024-07-15 13:17:22.630593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.416 [2024-07-15 13:17:22.630621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.416 [2024-07-15 13:17:22.630659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.416 [2024-07-15 13:17:22.630707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.416 [2024-07-15 13:17:22.630722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.416 [2024-07-15 13:17:22.630729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.416 [2024-07-15 13:17:22.630746] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:33:10.416 [2024-07-15 13:17:22.630756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:33:10.416 [2024-07-15 13:17:22.630795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:10.416 [2024-07-15 13:17:22.630909] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:33:10.416 [2024-07-15 13:17:22.630921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:10.416 [2024-07-15 13:17:22.630937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.416 [2024-07-15 13:17:22.630947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.630954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.630967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.417 [2024-07-15 13:17:22.631009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.417 [2024-07-15 13:17:22.631081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.417 [2024-07-15 13:17:22.631098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.417 [2024-07-15 13:17:22.631106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.417 
[2024-07-15 13:17:22.631115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.417 [2024-07-15 13:17:22.631126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:10.417 [2024-07-15 13:17:22.631145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.631172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.417 [2024-07-15 13:17:22.631204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.417 [2024-07-15 13:17:22.631260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.417 [2024-07-15 13:17:22.631274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.417 [2024-07-15 13:17:22.631281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.417 [2024-07-15 13:17:22.631297] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:10.417 [2024-07-15 13:17:22.631307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.631322] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:33:10.417 [2024-07-15 13:17:22.631343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.631366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.631388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.417 [2024-07-15 13:17:22.631425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.417 [2024-07-15 13:17:22.631532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.417 [2024-07-15 13:17:22.631553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.417 [2024-07-15 13:17:22.631562] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631570] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14d3a60): datao=0, datal=4096, cccid=0 00:33:10.417 [2024-07-15 13:17:22.631579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1516840) on tqpair(0x14d3a60): expected_datao=0, payload_size=4096 00:33:10.417 [2024-07-15 13:17:22.631588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 
[2024-07-15 13:17:22.631613] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631624] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.417 [2024-07-15 13:17:22.631651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.417 [2024-07-15 13:17:22.631657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.417 [2024-07-15 13:17:22.631677] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:33:10.417 [2024-07-15 13:17:22.631686] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:33:10.417 [2024-07-15 13:17:22.631694] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:33:10.417 [2024-07-15 13:17:22.631703] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:33:10.417 [2024-07-15 13:17:22.631711] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:33:10.417 [2024-07-15 13:17:22.631720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.631735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.631752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.631803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.417 [2024-07-15 13:17:22.631845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.417 [2024-07-15 13:17:22.631918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.417 [2024-07-15 13:17:22.631934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.417 [2024-07-15 13:17:22.631942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.417 [2024-07-15 13:17:22.631967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.631981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.631991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.417 [2024-07-15 13:17:22.632000] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.632022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.417 [2024-07-15 13:17:22.632033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.632057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.417 [2024-07-15 13:17:22.632067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.632092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.417 [2024-07-15 13:17:22.632102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.632127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:10.417 [2024-07-15 13:17:22.632144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.632165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.417 [2024-07-15 13:17:22.632199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516840, cid 0, qid 0 00:33:10.417 [2024-07-15 13:17:22.632212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15169c0, cid 1, qid 0 00:33:10.417 [2024-07-15 13:17:22.632221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516b40, cid 2, qid 0 00:33:10.417 [2024-07-15 13:17:22.632228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.417 [2024-07-15 13:17:22.632235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516e40, cid 4, qid 0 00:33:10.417 [2024-07-15 13:17:22.632309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.417 [2024-07-15 13:17:22.632334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.417 [2024-07-15 13:17:22.632343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516e40) on tqpair=0x14d3a60 00:33:10.417 [2024-07-15 13:17:22.632361] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:33:10.417 [2024-07-15 13:17:22.632379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:33:10.417 [2024-07-15 13:17:22.632401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.417 [2024-07-15 13:17:22.632412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14d3a60) 00:33:10.417 [2024-07-15 13:17:22.632425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.417 [2024-07-15 13:17:22.632464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516e40, cid 4, qid 0 00:33:10.417 [2024-07-15 13:17:22.632531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.418 [2024-07-15 13:17:22.632546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.418 [2024-07-15 13:17:22.632553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632561] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14d3a60): datao=0, datal=4096, cccid=4 00:33:10.418 [2024-07-15 13:17:22.632569] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1516e40) on tqpair(0x14d3a60): expected_datao=0, payload_size=4096 00:33:10.418 [2024-07-15 13:17:22.632579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632601] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.418 [2024-07-15 13:17:22.632630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.418 [2024-07-15 13:17:22.632636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516e40) on tqpair=0x14d3a60 00:33:10.418 [2024-07-15 13:17:22.632668] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:33:10.418 [2024-07-15 13:17:22.632739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14d3a60) 00:33:10.418 [2024-07-15 13:17:22.632792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.418 [2024-07-15 13:17:22.632809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.632825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14d3a60) 00:33:10.418 [2024-07-15 13:17:22.632836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.418 [2024-07-15 13:17:22.632879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1516e40, cid 4, qid 0 00:33:10.418 [2024-07-15 13:17:22.632890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516fc0, cid 5, qid 0 00:33:10.418 [2024-07-15 13:17:22.633008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.418 [2024-07-15 13:17:22.633024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.418 [2024-07-15 13:17:22.633033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.633040] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14d3a60): datao=0, datal=1024, cccid=4 00:33:10.418 [2024-07-15 13:17:22.633048] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1516e40) on tqpair(0x14d3a60): expected_datao=0, payload_size=1024 00:33:10.418 [2024-07-15 13:17:22.633056] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.633068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.633075] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.633085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.418 [2024-07-15 13:17:22.633096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.418 [2024-07-15 13:17:22.633103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.633111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516fc0) on tqpair=0x14d3a60 00:33:10.418 [2024-07-15 13:17:22.677808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.418 [2024-07-15 13:17:22.677863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.418 [2024-07-15 13:17:22.677874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.677884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516e40) on tqpair=0x14d3a60 00:33:10.418 [2024-07-15 13:17:22.677931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.677942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14d3a60) 00:33:10.418 [2024-07-15 13:17:22.677966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.418 [2024-07-15 13:17:22.678027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516e40, cid 4, qid 0 00:33:10.418 [2024-07-15 13:17:22.678165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.418 [2024-07-15 13:17:22.678181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.418 [2024-07-15 13:17:22.678189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14d3a60): datao=0, datal=3072, cccid=4 00:33:10.418 [2024-07-15 13:17:22.678206] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1516e40) on tqpair(0x14d3a60): expected_datao=0, payload_size=3072 00:33:10.418 [2024-07-15 13:17:22.678216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678231] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678241] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.418 [2024-07-15 13:17:22.678267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.418 [2024-07-15 13:17:22.678274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516e40) on tqpair=0x14d3a60 00:33:10.418 [2024-07-15 13:17:22.678303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14d3a60) 00:33:10.418 [2024-07-15 13:17:22.678327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.418 [2024-07-15 13:17:22.678376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516e40, cid 4, qid 0 00:33:10.418 [2024-07-15 13:17:22.678459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.418 [2024-07-15 13:17:22.678474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.418 [2024-07-15 13:17:22.678481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678487] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14d3a60): datao=0, datal=8, cccid=4 00:33:10.418 [2024-07-15 13:17:22.678495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1516e40) on tqpair(0x14d3a60): expected_datao=0, payload_size=8 00:33:10.418 [2024-07-15 13:17:22.678503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678516] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.678524] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.719923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.418 [2024-07-15 13:17:22.719975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.418 [2024-07-15 13:17:22.719983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.418 [2024-07-15 13:17:22.719989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516e40) on tqpair=0x14d3a60 00:33:10.418 ===================================================== 00:33:10.418 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:10.418 ===================================================== 00:33:10.418 Controller Capabilities/Features 00:33:10.418 ================================ 00:33:10.418 Vendor ID: 0000 00:33:10.418 Subsystem Vendor ID: 0000 00:33:10.418 Serial Number: .................... 00:33:10.418 Model Number: ........................................ 
00:33:10.418 Firmware Version: 24.09 00:33:10.418 Recommended Arb Burst: 0 00:33:10.418 IEEE OUI Identifier: 00 00 00 00:33:10.418 Multi-path I/O 00:33:10.418 May have multiple subsystem ports: No 00:33:10.418 May have multiple controllers: No 00:33:10.418 Associated with SR-IOV VF: No 00:33:10.418 Max Data Transfer Size: 131072 00:33:10.418 Max Number of Namespaces: 0 00:33:10.418 Max Number of I/O Queues: 1024 00:33:10.418 NVMe Specification Version (VS): 1.3 00:33:10.418 NVMe Specification Version (Identify): 1.3 00:33:10.418 Maximum Queue Entries: 128 00:33:10.418 Contiguous Queues Required: Yes 00:33:10.418 Arbitration Mechanisms Supported 00:33:10.418 Weighted Round Robin: Not Supported 00:33:10.418 Vendor Specific: Not Supported 00:33:10.418 Reset Timeout: 15000 ms 00:33:10.418 Doorbell Stride: 4 bytes 00:33:10.418 NVM Subsystem Reset: Not Supported 00:33:10.418 Command Sets Supported 00:33:10.418 NVM Command Set: Supported 00:33:10.418 Boot Partition: Not Supported 00:33:10.418 Memory Page Size Minimum: 4096 bytes 00:33:10.418 Memory Page Size Maximum: 4096 bytes 00:33:10.418 Persistent Memory Region: Not Supported 00:33:10.418 Optional Asynchronous Events Supported 00:33:10.418 Namespace Attribute Notices: Not Supported 00:33:10.418 Firmware Activation Notices: Not Supported 00:33:10.418 ANA Change Notices: Not Supported 00:33:10.418 PLE Aggregate Log Change Notices: Not Supported 00:33:10.418 LBA Status Info Alert Notices: Not Supported 00:33:10.418 EGE Aggregate Log Change Notices: Not Supported 00:33:10.418 Normal NVM Subsystem Shutdown event: Not Supported 00:33:10.418 Zone Descriptor Change Notices: Not Supported 00:33:10.418 Discovery Log Change Notices: Supported 00:33:10.418 Controller Attributes 00:33:10.418 128-bit Host Identifier: Not Supported 00:33:10.418 Non-Operational Permissive Mode: Not Supported 00:33:10.418 NVM Sets: Not Supported 00:33:10.418 Read Recovery Levels: Not Supported 00:33:10.418 Endurance Groups: Not Supported 00:33:10.418 Predictable Latency Mode: Not Supported 00:33:10.418 Traffic Based Keep ALive: Not Supported 00:33:10.418 Namespace Granularity: Not Supported 00:33:10.418 SQ Associations: Not Supported 00:33:10.418 UUID List: Not Supported 00:33:10.418 Multi-Domain Subsystem: Not Supported 00:33:10.418 Fixed Capacity Management: Not Supported 00:33:10.418 Variable Capacity Management: Not Supported 00:33:10.418 Delete Endurance Group: Not Supported 00:33:10.418 Delete NVM Set: Not Supported 00:33:10.418 Extended LBA Formats Supported: Not Supported 00:33:10.418 Flexible Data Placement Supported: Not Supported 00:33:10.418 00:33:10.418 Controller Memory Buffer Support 00:33:10.418 ================================ 00:33:10.418 Supported: No 00:33:10.418 00:33:10.418 Persistent Memory Region Support 00:33:10.418 ================================ 00:33:10.418 Supported: No 00:33:10.418 00:33:10.418 Admin Command Set Attributes 00:33:10.418 ============================ 00:33:10.419 Security Send/Receive: Not Supported 00:33:10.419 Format NVM: Not Supported 00:33:10.419 Firmware Activate/Download: Not Supported 00:33:10.419 Namespace Management: Not Supported 00:33:10.419 Device Self-Test: Not Supported 00:33:10.419 Directives: Not Supported 00:33:10.419 NVMe-MI: Not Supported 00:33:10.419 Virtualization Management: Not Supported 00:33:10.419 Doorbell Buffer Config: Not Supported 00:33:10.419 Get LBA Status Capability: Not Supported 00:33:10.419 Command & Feature Lockdown Capability: Not Supported 00:33:10.419 Abort Command Limit: 1 00:33:10.419 Async 
Event Request Limit: 4 00:33:10.419 Number of Firmware Slots: N/A 00:33:10.419 Firmware Slot 1 Read-Only: N/A 00:33:10.419 Firmware Activation Without Reset: N/A 00:33:10.419 Multiple Update Detection Support: N/A 00:33:10.419 Firmware Update Granularity: No Information Provided 00:33:10.419 Per-Namespace SMART Log: No 00:33:10.419 Asymmetric Namespace Access Log Page: Not Supported 00:33:10.419 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:10.419 Command Effects Log Page: Not Supported 00:33:10.419 Get Log Page Extended Data: Supported 00:33:10.419 Telemetry Log Pages: Not Supported 00:33:10.419 Persistent Event Log Pages: Not Supported 00:33:10.419 Supported Log Pages Log Page: May Support 00:33:10.419 Commands Supported & Effects Log Page: Not Supported 00:33:10.419 Feature Identifiers & Effects Log Page:May Support 00:33:10.419 NVMe-MI Commands & Effects Log Page: May Support 00:33:10.419 Data Area 4 for Telemetry Log: Not Supported 00:33:10.419 Error Log Page Entries Supported: 128 00:33:10.419 Keep Alive: Not Supported 00:33:10.419 00:33:10.419 NVM Command Set Attributes 00:33:10.419 ========================== 00:33:10.419 Submission Queue Entry Size 00:33:10.419 Max: 1 00:33:10.419 Min: 1 00:33:10.419 Completion Queue Entry Size 00:33:10.419 Max: 1 00:33:10.419 Min: 1 00:33:10.419 Number of Namespaces: 0 00:33:10.419 Compare Command: Not Supported 00:33:10.419 Write Uncorrectable Command: Not Supported 00:33:10.419 Dataset Management Command: Not Supported 00:33:10.419 Write Zeroes Command: Not Supported 00:33:10.419 Set Features Save Field: Not Supported 00:33:10.419 Reservations: Not Supported 00:33:10.419 Timestamp: Not Supported 00:33:10.419 Copy: Not Supported 00:33:10.419 Volatile Write Cache: Not Present 00:33:10.419 Atomic Write Unit (Normal): 1 00:33:10.419 Atomic Write Unit (PFail): 1 00:33:10.419 Atomic Compare & Write Unit: 1 00:33:10.419 Fused Compare & Write: Supported 00:33:10.419 Scatter-Gather List 00:33:10.419 SGL Command Set: Supported 00:33:10.419 SGL Keyed: Supported 00:33:10.419 SGL Bit Bucket Descriptor: Not Supported 00:33:10.419 SGL Metadata Pointer: Not Supported 00:33:10.419 Oversized SGL: Not Supported 00:33:10.419 SGL Metadata Address: Not Supported 00:33:10.419 SGL Offset: Supported 00:33:10.419 Transport SGL Data Block: Not Supported 00:33:10.419 Replay Protected Memory Block: Not Supported 00:33:10.419 00:33:10.419 Firmware Slot Information 00:33:10.419 ========================= 00:33:10.419 Active slot: 0 00:33:10.419 00:33:10.419 00:33:10.419 Error Log 00:33:10.419 ========= 00:33:10.419 00:33:10.419 Active Namespaces 00:33:10.419 ================= 00:33:10.419 Discovery Log Page 00:33:10.419 ================== 00:33:10.419 Generation Counter: 2 00:33:10.419 Number of Records: 2 00:33:10.419 Record Format: 0 00:33:10.419 00:33:10.419 Discovery Log Entry 0 00:33:10.419 ---------------------- 00:33:10.419 Transport Type: 3 (TCP) 00:33:10.419 Address Family: 1 (IPv4) 00:33:10.419 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:10.419 Entry Flags: 00:33:10.419 Duplicate Returned Information: 1 00:33:10.419 Explicit Persistent Connection Support for Discovery: 1 00:33:10.419 Transport Requirements: 00:33:10.419 Secure Channel: Not Required 00:33:10.419 Port ID: 0 (0x0000) 00:33:10.419 Controller ID: 65535 (0xffff) 00:33:10.419 Admin Max SQ Size: 128 00:33:10.419 Transport Service Identifier: 4420 00:33:10.419 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:10.419 Transport Address: 10.0.0.2 00:33:10.419 
Discovery Log Entry 1 00:33:10.419 ---------------------- 00:33:10.419 Transport Type: 3 (TCP) 00:33:10.419 Address Family: 1 (IPv4) 00:33:10.419 Subsystem Type: 2 (NVM Subsystem) 00:33:10.419 Entry Flags: 00:33:10.419 Duplicate Returned Information: 0 00:33:10.419 Explicit Persistent Connection Support for Discovery: 0 00:33:10.419 Transport Requirements: 00:33:10.419 Secure Channel: Not Required 00:33:10.419 Port ID: 0 (0x0000) 00:33:10.419 Controller ID: 65535 (0xffff) 00:33:10.419 Admin Max SQ Size: 128 00:33:10.419 Transport Service Identifier: 4420 00:33:10.419 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:33:10.419 Transport Address: 10.0.0.2 [2024-07-15 13:17:22.720153] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:33:10.419 [2024-07-15 13:17:22.720172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516840) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.419 [2024-07-15 13:17:22.720190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15169c0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.419 [2024-07-15 13:17:22.720201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516b40) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.419 [2024-07-15 13:17:22.720212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.419 [2024-07-15 13:17:22.720233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.419 [2024-07-15 13:17:22.720257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.419 [2024-07-15 13:17:22.720289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.419 [2024-07-15 13:17:22.720374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.419 [2024-07-15 13:17:22.720382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.419 [2024-07-15 13:17:22.720387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.419 [2024-07-15 
13:17:22.720418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.419 [2024-07-15 13:17:22.720446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.419 [2024-07-15 13:17:22.720571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.419 [2024-07-15 13:17:22.720588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.419 [2024-07-15 13:17:22.720594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720604] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:33:10.419 [2024-07-15 13:17:22.720610] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:33:10.419 [2024-07-15 13:17:22.720623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.419 [2024-07-15 13:17:22.720641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.419 [2024-07-15 13:17:22.720663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.419 [2024-07-15 13:17:22.720711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.419 [2024-07-15 13:17:22.720719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.419 [2024-07-15 13:17:22.720723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.419 [2024-07-15 13:17:22.720758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.419 [2024-07-15 13:17:22.720815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.419 [2024-07-15 13:17:22.720861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.419 [2024-07-15 13:17:22.720869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.419 [2024-07-15 13:17:22.720873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.419 [2024-07-15 13:17:22.720878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.419 [2024-07-15 13:17:22.720890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.720895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.720900] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.720908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.720930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.720980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.720987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.720991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.720996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.721086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.721205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.721319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.721449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.721564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 
[2024-07-15 13:17:22.721690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.721697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.721701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.721717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.721726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.721734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.721754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.725801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.725831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.725837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.725843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.725862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.725868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.725873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14d3a60) 00:33:10.420 [2024-07-15 13:17:22.725884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.420 [2024-07-15 13:17:22.725918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1516cc0, cid 3, qid 0 00:33:10.420 [2024-07-15 13:17:22.725989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.420 [2024-07-15 13:17:22.725996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.420 [2024-07-15 13:17:22.726001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.420 [2024-07-15 13:17:22.726005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1516cc0) on tqpair=0x14d3a60 00:33:10.420 [2024-07-15 13:17:22.726014] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:33:10.420 00:33:10.420 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:33:10.420 [2024-07-15 13:17:22.767237] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
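The spdk_nvme_identify invocation above is what drives the controller bring-up recorded in the debug trace that follows: FABRIC CONNECT on the admin queue, VS/CAP property reads, CC.EN = 1, waiting for CSTS.RDY = 1, then the IDENTIFY, AER and keep-alive setup steps. As a reading aid, here is a minimal C sketch of the same flow through the public SPDK host API; it is not part of the test log, the application name and printed fields are illustrative, and error handling is abbreviated.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Standard SPDK environment bring-up (hugepages, PCI, etc.). */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* illustrative name, not taken from the test */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target string the test passes to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * spdk_nvme_connect() runs the admin-queue initialization state machine
	 * logged below: FABRIC CONNECT, read VS/CAP, set CC.EN = 1, wait for
	 * CSTS.RDY = 1, IDENTIFY controller and namespaces, configure AER,
	 * and set the keep-alive timeout.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A couple of fields from the IDENTIFY CONTROLLER data, as an example. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Subsystem NQN: %s\n", cdata->subnqn);
	printf("First active NSID: %u\n", spdk_nvme_ctrlr_get_first_active_ns(ctrlr));

	/* Detach triggers the controller shutdown sequence seen earlier in the log. */
	spdk_nvme_detach(ctrlr);
	return 0;
}
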
00:33:10.420 [2024-07-15 13:17:22.767312] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118614 ] 00:33:10.682 [2024-07-15 13:17:22.915268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:33:10.682 [2024-07-15 13:17:22.915360] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:10.682 [2024-07-15 13:17:22.915368] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:10.682 [2024-07-15 13:17:22.915383] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:10.682 [2024-07-15 13:17:22.915391] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:10.682 [2024-07-15 13:17:22.915553] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:33:10.682 [2024-07-15 13:17:22.915631] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15f9a60 0 00:33:10.682 [2024-07-15 13:17:22.919787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:10.682 [2024-07-15 13:17:22.919812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:10.682 [2024-07-15 13:17:22.919819] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:10.682 [2024-07-15 13:17:22.919823] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:10.682 [2024-07-15 13:17:22.919871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.919879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.919883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.919899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:10.682 [2024-07-15 13:17:22.919934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.927811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.927844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.927851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.927857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.927872] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:10.682 [2024-07-15 13:17:22.927883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:33:10.682 [2024-07-15 13:17:22.927890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:33:10.682 [2024-07-15 13:17:22.927914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.927922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.927926] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.927947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928113] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:33:10.682 [2024-07-15 13:17:22.928121] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:33:10.682 [2024-07-15 13:17:22.928131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.928154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:33:10.682 [2024-07-15 13:17:22.928284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.928316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928405] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.928445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928530] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:33:10.682 [2024-07-15 13:17:22.928536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928651] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:33:10.682 [2024-07-15 13:17:22.928655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.928683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928801] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:10.682 [2024-07-15 13:17:22.928814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.682 [2024-07-15 13:17:22.928833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.682 [2024-07-15 13:17:22.928860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.682 [2024-07-15 13:17:22.928916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.682 [2024-07-15 13:17:22.928923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.682 [2024-07-15 13:17:22.928927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.682 [2024-07-15 13:17:22.928931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.682 [2024-07-15 13:17:22.928937] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:10.682 [2024-07-15 13:17:22.928942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:33:10.682 [2024-07-15 13:17:22.928951] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:33:10.682 [2024-07-15 13:17:22.928963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.928976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.928981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.928989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.683 [2024-07-15 13:17:22.929010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.683 [2024-07-15 13:17:22.929124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.683 [2024-07-15 13:17:22.929141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.683 [2024-07-15 13:17:22.929147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929151] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=4096, cccid=0 00:33:10.683 [2024-07-15 13:17:22.929157] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163c840) on tqpair(0x15f9a60): expected_datao=0, payload_size=4096 00:33:10.683 [2024-07-15 13:17:22.929162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929172] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 
13:17:22.929186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.683 [2024-07-15 13:17:22.929193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.683 [2024-07-15 13:17:22.929197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.683 [2024-07-15 13:17:22.929211] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:33:10.683 [2024-07-15 13:17:22.929217] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:33:10.683 [2024-07-15 13:17:22.929223] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:33:10.683 [2024-07-15 13:17:22.929227] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:33:10.683 [2024-07-15 13:17:22.929232] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:33:10.683 [2024-07-15 13:17:22.929240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.683 [2024-07-15 13:17:22.929325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.683 [2024-07-15 13:17:22.929390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.683 [2024-07-15 13:17:22.929398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.683 [2024-07-15 13:17:22.929402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.683 [2024-07-15 13:17:22.929415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.683 [2024-07-15 13:17:22.929438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15f9a60) 00:33:10.683 
[2024-07-15 13:17:22.929452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.683 [2024-07-15 13:17:22.929459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.683 [2024-07-15 13:17:22.929480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.683 [2024-07-15 13:17:22.929500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.683 [2024-07-15 13:17:22.929559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c840, cid 0, qid 0 00:33:10.683 [2024-07-15 13:17:22.929566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163c9c0, cid 1, qid 0 00:33:10.683 [2024-07-15 13:17:22.929571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cb40, cid 2, qid 0 00:33:10.683 [2024-07-15 13:17:22.929577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.683 [2024-07-15 13:17:22.929582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.683 [2024-07-15 13:17:22.929669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.683 [2024-07-15 13:17:22.929676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.683 [2024-07-15 13:17:22.929680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.683 [2024-07-15 13:17:22.929690] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:33:10.683 [2024-07-15 13:17:22.929700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929710] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.929778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.683 [2024-07-15 13:17:22.929814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.683 [2024-07-15 13:17:22.929875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.683 [2024-07-15 13:17:22.929887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.683 [2024-07-15 13:17:22.929892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.683 [2024-07-15 13:17:22.929964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.929987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.929992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.930000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.683 [2024-07-15 13:17:22.930023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.683 [2024-07-15 13:17:22.930080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.683 [2024-07-15 13:17:22.930087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.683 [2024-07-15 13:17:22.930091] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=4096, cccid=4 00:33:10.683 [2024-07-15 13:17:22.930101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163ce40) on tqpair(0x15f9a60): expected_datao=0, payload_size=4096 00:33:10.683 [2024-07-15 13:17:22.930106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930114] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930118] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.683 [2024-07-15 13:17:22.930133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:33:10.683 [2024-07-15 13:17:22.930137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.683 [2024-07-15 13:17:22.930159] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:33:10.683 [2024-07-15 13:17:22.930170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.930182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:33:10.683 [2024-07-15 13:17:22.930190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.683 [2024-07-15 13:17:22.930203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.683 [2024-07-15 13:17:22.930225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.683 [2024-07-15 13:17:22.930297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.683 [2024-07-15 13:17:22.930310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.683 [2024-07-15 13:17:22.930314] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.683 [2024-07-15 13:17:22.930318] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=4096, cccid=4 00:33:10.684 [2024-07-15 13:17:22.930324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163ce40) on tqpair(0x15f9a60): expected_datao=0, payload_size=4096 00:33:10.684 [2024-07-15 13:17:22.930328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930336] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930341] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.930356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.930360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.930383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.930419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.930445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.684 [2024-07-15 13:17:22.930506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.684 [2024-07-15 13:17:22.930512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.684 [2024-07-15 13:17:22.930516] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930520] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=4096, cccid=4 00:33:10.684 [2024-07-15 13:17:22.930525] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163ce40) on tqpair(0x15f9a60): expected_datao=0, payload_size=4096 00:33:10.684 [2024-07-15 13:17:22.930530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930542] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.930557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.930561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.930575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930621] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:33:10.684 [2024-07-15 13:17:22.930626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:33:10.684 [2024-07-15 13:17:22.930632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:33:10.684 [2024-07-15 13:17:22.930652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.930666] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.930674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.930689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.684 [2024-07-15 13:17:22.930716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.684 [2024-07-15 13:17:22.930726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cfc0, cid 5, qid 0 00:33:10.684 [2024-07-15 13:17:22.930819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.930832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.930836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.930848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.930854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.930858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cfc0) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.930875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.930889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.930915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cfc0, cid 5, qid 0 00:33:10.684 [2024-07-15 13:17:22.930974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.930981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.930985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.930989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cfc0) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.931000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cfc0, cid 5, qid 0 00:33:10.684 [2024-07-15 13:17:22.931077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 
[2024-07-15 13:17:22.931084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.931088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cfc0) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.931103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cfc0, cid 5, qid 0 00:33:10.684 [2024-07-15 13:17:22.931188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.684 [2024-07-15 13:17:22.931195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.684 [2024-07-15 13:17:22.931199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cfc0) on tqpair=0x15f9a60 00:33:10.684 [2024-07-15 13:17:22.931225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15f9a60) 00:33:10.684 [2024-07-15 13:17:22.931335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.684 [2024-07-15 13:17:22.931362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163cfc0, cid 5, qid 0 00:33:10.684 [2024-07-15 13:17:22.931370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ce40, cid 4, qid 0 00:33:10.684 [2024-07-15 13:17:22.931375] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163d140, cid 6, qid 0 00:33:10.684 [2024-07-15 13:17:22.931380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163d2c0, cid 7, qid 0 00:33:10.684 [2024-07-15 13:17:22.931516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.684 [2024-07-15 13:17:22.931530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.684 [2024-07-15 13:17:22.931535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=8192, cccid=5 00:33:10.684 [2024-07-15 13:17:22.931544] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163cfc0) on tqpair(0x15f9a60): expected_datao=0, payload_size=8192 00:33:10.684 [2024-07-15 13:17:22.931549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931567] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931572] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.684 [2024-07-15 13:17:22.931584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.684 [2024-07-15 13:17:22.931588] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931592] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=512, cccid=4 00:33:10.684 [2024-07-15 13:17:22.931597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163ce40) on tqpair(0x15f9a60): expected_datao=0, payload_size=512 00:33:10.684 [2024-07-15 13:17:22.931602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931622] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.684 [2024-07-15 13:17:22.931627] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.685 [2024-07-15 13:17:22.931640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.685 [2024-07-15 13:17:22.931643] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931647] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f9a60): datao=0, datal=512, cccid=6 00:33:10.685 [2024-07-15 13:17:22.931652] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163d140) on tqpair(0x15f9a60): expected_datao=0, payload_size=512 00:33:10.685 [2024-07-15 13:17:22.931657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931664] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931668] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:10.685 [2024-07-15 13:17:22.931680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:10.685 [2024-07-15 13:17:22.931683] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931687] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x15f9a60): datao=0, datal=4096, cccid=7 00:33:10.685 [2024-07-15 13:17:22.931692] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x163d2c0) on tqpair(0x15f9a60): expected_datao=0, payload_size=4096 00:33:10.685 [2024-07-15 13:17:22.931697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931704] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931708] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.685 [2024-07-15 13:17:22.931726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.685 [2024-07-15 13:17:22.931732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cfc0) on tqpair=0x15f9a60 00:33:10.685 [2024-07-15 13:17:22.931793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.685 [2024-07-15 13:17:22.931804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.685 [2024-07-15 13:17:22.931808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ce40) on tqpair=0x15f9a60 00:33:10.685 [2024-07-15 13:17:22.931826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.685 [2024-07-15 13:17:22.931833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.685 [2024-07-15 13:17:22.931837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163d140) on tqpair=0x15f9a60 00:33:10.685 [2024-07-15 13:17:22.931849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.685 [2024-07-15 13:17:22.931856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.685 [2024-07-15 13:17:22.931859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.685 [2024-07-15 13:17:22.931864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163d2c0) on tqpair=0x15f9a60 00:33:10.685 ===================================================== 00:33:10.685 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.685 ===================================================== 00:33:10.685 Controller Capabilities/Features 00:33:10.685 ================================ 00:33:10.685 Vendor ID: 8086 00:33:10.685 Subsystem Vendor ID: 8086 00:33:10.685 Serial Number: SPDK00000000000001 00:33:10.685 Model Number: SPDK bdev Controller 00:33:10.685 Firmware Version: 24.09 00:33:10.685 Recommended Arb Burst: 6 00:33:10.685 IEEE OUI Identifier: e4 d2 5c 00:33:10.685 Multi-path I/O 00:33:10.685 May have multiple subsystem ports: Yes 00:33:10.685 May have multiple controllers: Yes 00:33:10.685 Associated with SR-IOV VF: No 00:33:10.685 Max Data Transfer Size: 131072 00:33:10.685 Max Number of Namespaces: 32 00:33:10.685 Max Number of I/O Queues: 127 00:33:10.685 NVMe Specification Version (VS): 1.3 00:33:10.685 NVMe Specification Version (Identify): 1.3 00:33:10.685 Maximum Queue Entries: 128 00:33:10.685 Contiguous Queues Required: Yes 00:33:10.685 Arbitration Mechanisms Supported 00:33:10.685 Weighted Round Robin: Not Supported 
00:33:10.685 Vendor Specific: Not Supported 00:33:10.685 Reset Timeout: 15000 ms 00:33:10.685 Doorbell Stride: 4 bytes 00:33:10.685 NVM Subsystem Reset: Not Supported 00:33:10.685 Command Sets Supported 00:33:10.685 NVM Command Set: Supported 00:33:10.685 Boot Partition: Not Supported 00:33:10.685 Memory Page Size Minimum: 4096 bytes 00:33:10.685 Memory Page Size Maximum: 4096 bytes 00:33:10.685 Persistent Memory Region: Not Supported 00:33:10.685 Optional Asynchronous Events Supported 00:33:10.685 Namespace Attribute Notices: Supported 00:33:10.685 Firmware Activation Notices: Not Supported 00:33:10.685 ANA Change Notices: Not Supported 00:33:10.685 PLE Aggregate Log Change Notices: Not Supported 00:33:10.685 LBA Status Info Alert Notices: Not Supported 00:33:10.685 EGE Aggregate Log Change Notices: Not Supported 00:33:10.685 Normal NVM Subsystem Shutdown event: Not Supported 00:33:10.685 Zone Descriptor Change Notices: Not Supported 00:33:10.685 Discovery Log Change Notices: Not Supported 00:33:10.685 Controller Attributes 00:33:10.685 128-bit Host Identifier: Supported 00:33:10.685 Non-Operational Permissive Mode: Not Supported 00:33:10.685 NVM Sets: Not Supported 00:33:10.685 Read Recovery Levels: Not Supported 00:33:10.685 Endurance Groups: Not Supported 00:33:10.685 Predictable Latency Mode: Not Supported 00:33:10.685 Traffic Based Keep ALive: Not Supported 00:33:10.685 Namespace Granularity: Not Supported 00:33:10.685 SQ Associations: Not Supported 00:33:10.685 UUID List: Not Supported 00:33:10.685 Multi-Domain Subsystem: Not Supported 00:33:10.685 Fixed Capacity Management: Not Supported 00:33:10.685 Variable Capacity Management: Not Supported 00:33:10.685 Delete Endurance Group: Not Supported 00:33:10.685 Delete NVM Set: Not Supported 00:33:10.685 Extended LBA Formats Supported: Not Supported 00:33:10.685 Flexible Data Placement Supported: Not Supported 00:33:10.685 00:33:10.685 Controller Memory Buffer Support 00:33:10.685 ================================ 00:33:10.685 Supported: No 00:33:10.685 00:33:10.685 Persistent Memory Region Support 00:33:10.685 ================================ 00:33:10.685 Supported: No 00:33:10.685 00:33:10.685 Admin Command Set Attributes 00:33:10.685 ============================ 00:33:10.685 Security Send/Receive: Not Supported 00:33:10.685 Format NVM: Not Supported 00:33:10.685 Firmware Activate/Download: Not Supported 00:33:10.685 Namespace Management: Not Supported 00:33:10.685 Device Self-Test: Not Supported 00:33:10.685 Directives: Not Supported 00:33:10.685 NVMe-MI: Not Supported 00:33:10.685 Virtualization Management: Not Supported 00:33:10.685 Doorbell Buffer Config: Not Supported 00:33:10.685 Get LBA Status Capability: Not Supported 00:33:10.685 Command & Feature Lockdown Capability: Not Supported 00:33:10.685 Abort Command Limit: 4 00:33:10.685 Async Event Request Limit: 4 00:33:10.685 Number of Firmware Slots: N/A 00:33:10.685 Firmware Slot 1 Read-Only: N/A 00:33:10.685 Firmware Activation Without Reset: N/A 00:33:10.685 Multiple Update Detection Support: N/A 00:33:10.685 Firmware Update Granularity: No Information Provided 00:33:10.685 Per-Namespace SMART Log: No 00:33:10.685 Asymmetric Namespace Access Log Page: Not Supported 00:33:10.685 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:10.685 Command Effects Log Page: Supported 00:33:10.685 Get Log Page Extended Data: Supported 00:33:10.685 Telemetry Log Pages: Not Supported 00:33:10.685 Persistent Event Log Pages: Not Supported 00:33:10.685 Supported Log Pages Log Page: May Support 
00:33:10.685 Commands Supported & Effects Log Page: Not Supported 00:33:10.685 Feature Identifiers & Effects Log Page:May Support 00:33:10.685 NVMe-MI Commands & Effects Log Page: May Support 00:33:10.685 Data Area 4 for Telemetry Log: Not Supported 00:33:10.685 Error Log Page Entries Supported: 128 00:33:10.685 Keep Alive: Supported 00:33:10.685 Keep Alive Granularity: 10000 ms 00:33:10.685 00:33:10.685 NVM Command Set Attributes 00:33:10.685 ========================== 00:33:10.685 Submission Queue Entry Size 00:33:10.685 Max: 64 00:33:10.685 Min: 64 00:33:10.685 Completion Queue Entry Size 00:33:10.685 Max: 16 00:33:10.685 Min: 16 00:33:10.685 Number of Namespaces: 32 00:33:10.685 Compare Command: Supported 00:33:10.685 Write Uncorrectable Command: Not Supported 00:33:10.685 Dataset Management Command: Supported 00:33:10.685 Write Zeroes Command: Supported 00:33:10.685 Set Features Save Field: Not Supported 00:33:10.685 Reservations: Supported 00:33:10.685 Timestamp: Not Supported 00:33:10.685 Copy: Supported 00:33:10.685 Volatile Write Cache: Present 00:33:10.685 Atomic Write Unit (Normal): 1 00:33:10.685 Atomic Write Unit (PFail): 1 00:33:10.685 Atomic Compare & Write Unit: 1 00:33:10.685 Fused Compare & Write: Supported 00:33:10.685 Scatter-Gather List 00:33:10.685 SGL Command Set: Supported 00:33:10.685 SGL Keyed: Supported 00:33:10.685 SGL Bit Bucket Descriptor: Not Supported 00:33:10.685 SGL Metadata Pointer: Not Supported 00:33:10.685 Oversized SGL: Not Supported 00:33:10.685 SGL Metadata Address: Not Supported 00:33:10.685 SGL Offset: Supported 00:33:10.685 Transport SGL Data Block: Not Supported 00:33:10.685 Replay Protected Memory Block: Not Supported 00:33:10.685 00:33:10.685 Firmware Slot Information 00:33:10.685 ========================= 00:33:10.685 Active slot: 1 00:33:10.685 Slot 1 Firmware Revision: 24.09 00:33:10.685 00:33:10.685 00:33:10.685 Commands Supported and Effects 00:33:10.685 ============================== 00:33:10.685 Admin Commands 00:33:10.685 -------------- 00:33:10.686 Get Log Page (02h): Supported 00:33:10.686 Identify (06h): Supported 00:33:10.686 Abort (08h): Supported 00:33:10.686 Set Features (09h): Supported 00:33:10.686 Get Features (0Ah): Supported 00:33:10.686 Asynchronous Event Request (0Ch): Supported 00:33:10.686 Keep Alive (18h): Supported 00:33:10.686 I/O Commands 00:33:10.686 ------------ 00:33:10.686 Flush (00h): Supported LBA-Change 00:33:10.686 Write (01h): Supported LBA-Change 00:33:10.686 Read (02h): Supported 00:33:10.686 Compare (05h): Supported 00:33:10.686 Write Zeroes (08h): Supported LBA-Change 00:33:10.686 Dataset Management (09h): Supported LBA-Change 00:33:10.686 Copy (19h): Supported LBA-Change 00:33:10.686 00:33:10.686 Error Log 00:33:10.686 ========= 00:33:10.686 00:33:10.686 Arbitration 00:33:10.686 =========== 00:33:10.686 Arbitration Burst: 1 00:33:10.686 00:33:10.686 Power Management 00:33:10.686 ================ 00:33:10.686 Number of Power States: 1 00:33:10.686 Current Power State: Power State #0 00:33:10.686 Power State #0: 00:33:10.686 Max Power: 0.00 W 00:33:10.686 Non-Operational State: Operational 00:33:10.686 Entry Latency: Not Reported 00:33:10.686 Exit Latency: Not Reported 00:33:10.686 Relative Read Throughput: 0 00:33:10.686 Relative Read Latency: 0 00:33:10.686 Relative Write Throughput: 0 00:33:10.686 Relative Write Latency: 0 00:33:10.686 Idle Power: Not Reported 00:33:10.686 Active Power: Not Reported 00:33:10.686 Non-Operational Permissive Mode: Not Supported 00:33:10.686 00:33:10.686 Health 
Information 00:33:10.686 ================== 00:33:10.686 Critical Warnings: 00:33:10.686 Available Spare Space: OK 00:33:10.686 Temperature: OK 00:33:10.686 Device Reliability: OK 00:33:10.686 Read Only: No 00:33:10.686 Volatile Memory Backup: OK 00:33:10.686 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:10.686 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:10.686 Available Spare: 0% 00:33:10.686 Available Spare Threshold: 0% 00:33:10.686 Life Percentage Used:[2024-07-15 13:17:22.931985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.931994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15f9a60) 00:33:10.686 [2024-07-15 13:17:22.932003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.686 [2024-07-15 13:17:22.932033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163d2c0, cid 7, qid 0 00:33:10.686 [2024-07-15 13:17:22.932090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.686 [2024-07-15 13:17:22.932098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.686 [2024-07-15 13:17:22.932102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163d2c0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932148] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:33:10.686 [2024-07-15 13:17:22.932161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c840) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.686 [2024-07-15 13:17:22.932174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163c9c0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.686 [2024-07-15 13:17:22.932185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163cb40) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.686 [2024-07-15 13:17:22.932195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.686 [2024-07-15 13:17:22.932210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.686 [2024-07-15 13:17:22.932227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.686 [2024-07-15 13:17:22.932264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.686 [2024-07-15 
13:17:22.932311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.686 [2024-07-15 13:17:22.932323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.686 [2024-07-15 13:17:22.932327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.686 [2024-07-15 13:17:22.932359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.686 [2024-07-15 13:17:22.932387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.686 [2024-07-15 13:17:22.932511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.686 [2024-07-15 13:17:22.932528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.686 [2024-07-15 13:17:22.932533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932544] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:33:10.686 [2024-07-15 13:17:22.932550] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:33:10.686 [2024-07-15 13:17:22.932562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.686 [2024-07-15 13:17:22.932579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.686 [2024-07-15 13:17:22.932601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.686 [2024-07-15 13:17:22.932657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.686 [2024-07-15 13:17:22.932664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.686 [2024-07-15 13:17:22.932668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.686 [2024-07-15 13:17:22.932684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.686 [2024-07-15 13:17:22.932702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.686 [2024-07-15 13:17:22.932722] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.686 [2024-07-15 13:17:22.932790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.686 [2024-07-15 13:17:22.932803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.686 [2024-07-15 13:17:22.932808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.686 [2024-07-15 13:17:22.932812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.932826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.932831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.932836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.932844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.932871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.932924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.932932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.932935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.932940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.932951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.932956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.932960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.932968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.932988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 
13:17:22.933170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 
[2024-07-15 13:17:22.933539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.933865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.933893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.933961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.933968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.933972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.933988] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.933997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.934005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.934025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.934082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.934094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.934099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.934115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.934133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.934153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.934209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.934224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.934229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.934254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.934286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.934313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.687 [2024-07-15 13:17:22.934379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.687 [2024-07-15 13:17:22.934386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.687 [2024-07-15 13:17:22.934390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.687 [2024-07-15 13:17:22.934406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.687 [2024-07-15 13:17:22.934415] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.687 [2024-07-15 13:17:22.934423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.687 [2024-07-15 13:17:22.934443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.934525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.934532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.934536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.934551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.934568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.934587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.934669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.934685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.934693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.934720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.934750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.934805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.934876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.934891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.934898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.934924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.934938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.934950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.934984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.935039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.935050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.935056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.935078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.935103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.935138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.935204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.935224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.935233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.935260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.935287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.935324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.935394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.935414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.935423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.935450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.935478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.935513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 
13:17:22.935586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.935603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.935621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.935647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.935661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.935672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.935705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.939798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.939840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.939851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.939859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.939889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.939900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.939906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f9a60) 00:33:10.688 [2024-07-15 13:17:22.939918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:10.688 [2024-07-15 13:17:22.939954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x163ccc0, cid 3, qid 0 00:33:10.688 [2024-07-15 13:17:22.940049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:10.688 [2024-07-15 13:17:22.940063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:10.688 [2024-07-15 13:17:22.940067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:10.688 [2024-07-15 13:17:22.940072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x163ccc0) on tqpair=0x15f9a60 00:33:10.688 [2024-07-15 13:17:22.940082] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:33:10.688 0% 00:33:10.688 Data Units Read: 0 00:33:10.688 Data Units Written: 0 00:33:10.688 Host Read Commands: 0 00:33:10.688 Host Write Commands: 0 00:33:10.688 Controller Busy Time: 0 minutes 00:33:10.688 Power Cycles: 0 00:33:10.688 Power On Hours: 0 hours 00:33:10.688 Unsafe Shutdowns: 0 00:33:10.688 Unrecoverable Media Errors: 0 00:33:10.688 Lifetime Error Log Entries: 0 00:33:10.688 Warning Temperature Time: 0 minutes 00:33:10.688 Critical Temperature Time: 0 minutes 00:33:10.688 00:33:10.688 Number of Queues 00:33:10.688 ================ 00:33:10.688 Number of I/O Submission Queues: 127 00:33:10.688 Number of I/O Completion Queues: 127 00:33:10.688 00:33:10.688 Active Namespaces 00:33:10.688 
================= 00:33:10.688 Namespace ID:1 00:33:10.688 Error Recovery Timeout: Unlimited 00:33:10.688 Command Set Identifier: NVM (00h) 00:33:10.688 Deallocate: Supported 00:33:10.688 Deallocated/Unwritten Error: Not Supported 00:33:10.688 Deallocated Read Value: Unknown 00:33:10.688 Deallocate in Write Zeroes: Not Supported 00:33:10.688 Deallocated Guard Field: 0xFFFF 00:33:10.688 Flush: Supported 00:33:10.688 Reservation: Supported 00:33:10.688 Namespace Sharing Capabilities: Multiple Controllers 00:33:10.688 Size (in LBAs): 131072 (0GiB) 00:33:10.688 Capacity (in LBAs): 131072 (0GiB) 00:33:10.688 Utilization (in LBAs): 131072 (0GiB) 00:33:10.688 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:10.688 EUI64: ABCDEF0123456789 00:33:10.688 UUID: 8a28aa3b-ee56-49db-a9c7-d30cc9ba1b14 00:33:10.688 Thin Provisioning: Not Supported 00:33:10.688 Per-NS Atomic Units: Yes 00:33:10.688 Atomic Boundary Size (Normal): 0 00:33:10.688 Atomic Boundary Size (PFail): 0 00:33:10.688 Atomic Boundary Offset: 0 00:33:10.688 Maximum Single Source Range Length: 65535 00:33:10.688 Maximum Copy Length: 65535 00:33:10.688 Maximum Source Range Count: 1 00:33:10.688 NGUID/EUI64 Never Reused: No 00:33:10.688 Namespace Write Protected: No 00:33:10.688 Number of LBA Formats: 1 00:33:10.688 Current LBA Format: LBA Format #00 00:33:10.688 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:10.688 00:33:10.688 13:17:22 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@51 -- # sync 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.688 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.689 rmmod nvme_tcp 00:33:10.689 rmmod nvme_fabrics 00:33:10.689 rmmod nvme_keyring 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@493 -- # '[' -n 118567 ']' 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@494 -- # killprocess 118567 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 118567 ']' 00:33:10.689 13:17:23 
nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 118567 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118567 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:10.689 killing process with pid 118567 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118567' 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@967 -- # kill 118567 00:33:10.689 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@972 -- # wait 118567 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@282 -- # remove_spdk_ns 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:10.949 00:33:10.949 real 0m1.935s 00:33:10.949 user 0m3.043s 00:33:10.949 sys 0m0.811s 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:10.949 ************************************ 00:33:10.949 END TEST nvmf_identify 00:33:10.949 ************************************ 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@102 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.949 ************************************ 00:33:10.949 START TEST nvmf_perf 00:33:10.949 ************************************ 00:33:10.949 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:11.206 * Looking for test storage... 
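The identify listing above was produced by SPDK's userspace initiator against the target advertised at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1), and the teardown trace then runs rpc_cmd nvmf_delete_subsystem, which wraps /home/vagrant/spdk_repo/spdk/scripts/rpc.py. A minimal sketch of the equivalent manual steps from a host, assuming the kernel nvme-tcp initiator is used instead of SPDK's and that the controller enumerates as /dev/nvme0 (both are assumptions, not part of this run):

    # Discover and connect to the same listener the test exercised
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Read back controller data comparable to the identify output above, then disconnect
    nvme id-ctrl /dev/nvme0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Target-side teardown, as issued by the script's rpc_cmd wrapper
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1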
00:33:11.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:11.206 13:17:23 
nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@416 -- # remove_spdk_ns 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:11.206 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:11.207 Cannot find device "nvmf_tgt_br" 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@159 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:33:11.207 Cannot find device "nvmf_tgt_br2" 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@160 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:11.207 13:17:23 
nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:11.207 Cannot find device "nvmf_tgt_br" 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@162 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:11.207 Cannot find device "nvmf_tgt_br2" 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@163 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:11.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@166 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:11.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@167 -- # true 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:11.207 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:11.465 
13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:11.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:33:11.465 00:33:11.465 --- 10.0.0.2 ping statistics --- 00:33:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.465 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:11.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:11.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:33:11.465 00:33:11.465 --- 10.0.0.3 ping statistics --- 00:33:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.465 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:11.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:33:11.465 00:33:11.465 --- 10.0.0.1 ping statistics --- 00:33:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.465 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@437 -- # return 0 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@485 -- # nvmfpid=118775 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@486 -- # waitforlisten 118775 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 118775 ']' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.465 13:17:23 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:11.465 [2024-07-15 13:17:23.821331] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:11.466 [2024-07-15 13:17:23.822530] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:11.466 [2024-07-15 13:17:23.822590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.723 [2024-07-15 13:17:23.957821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:11.723 [2024-07-15 13:17:24.016816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.723 [2024-07-15 13:17:24.017073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.723 [2024-07-15 13:17:24.017159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.723 [2024-07-15 13:17:24.017236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.723 [2024-07-15 13:17:24.017312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.723 [2024-07-15 13:17:24.017474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.723 [2024-07-15 13:17:24.017547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.723 [2024-07-15 13:17:24.017678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:11.723 [2024-07-15 13:17:24.017800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.723 [2024-07-15 13:17:24.076447] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:11.723 [2024-07-15 13:17:24.076618] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:11.723 [2024-07-15 13:17:24.076697] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:11.723 [2024-07-15 13:17:24.077345] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:11.723 [2024-07-15 13:17:24.077844] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
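Condensed, the nvmf_veth_init / nvmfappstart sequence traced above amounts to the following: a veth-plus-bridge topology built inside a network namespace, then nvmf_tgt launched in interrupt mode inside that namespace. Device names, addresses, and flags are copied from the trace; the grouping and ordering are simplified, so treat this as a sketch of what common.sh does here, not the script itself.

  # build the test topology (as traced by nvmf_veth_init above)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # nvmfappstart: run the target inside the namespace, in interrupt mode
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

The "--interrupt-mode" flag is what produces the thread.c / reactor.c notices above: every reactor and nvmf_tgt poll group starts in interrupt mode rather than busy polling.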
00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:12.653 13:17:24 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:33:12.910 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:33:12.910 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:33:13.166 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:33:13.166 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:13.423 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:33:13.423 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:33:13.423 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:33:13.423 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:33:13.423 13:17:25 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:13.694 [2024-07-15 13:17:26.138744] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.968 13:17:26 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.225 13:17:26 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:14.225 13:17:26 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.482 13:17:26 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:14.482 13:17:26 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:15.049 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.306 [2024-07-15 13:17:27.619056] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.306 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:15.564 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:33:15.564 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:15.564 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:33:15.564 13:17:27 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:16.936 Initializing NVMe Controllers 00:33:16.936 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:16.936 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:33:16.936 Initialization complete. Launching workers. 00:33:16.936 ======================================================== 00:33:16.936 Latency(us) 00:33:16.936 Device Information : IOPS MiB/s Average min max 00:33:16.936 PCIE (0000:00:10.0) NSID 1 from core 0: 26359.42 102.97 1213.90 70.49 15041.24 00:33:16.936 ======================================================== 00:33:16.936 Total : 26359.42 102.97 1213.90 70.49 15041.24 00:33:16.936 00:33:16.936 13:17:29 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.307 Initializing NVMe Controllers 00:33:18.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:18.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:18.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:18.307 Initialization complete. Launching workers. 00:33:18.307 ======================================================== 00:33:18.307 Latency(us) 00:33:18.307 Device Information : IOPS MiB/s Average min max 00:33:18.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2703.00 10.56 369.58 122.98 6094.02 00:33:18.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8177.85 4962.70 12051.09 00:33:18.307 ======================================================== 00:33:18.307 Total : 2826.00 11.04 709.43 122.98 12051.09 00:33:18.307 00:33:18.307 13:17:30 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:19.267 Initializing NVMe Controllers 00:33:19.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:19.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:19.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:19.267 Initialization complete. Launching workers. 
00:33:19.267 ======================================================== 00:33:19.267 Latency(us) 00:33:19.267 Device Information : IOPS MiB/s Average min max 00:33:19.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8216.00 32.09 3895.81 784.30 9524.71 00:33:19.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2719.00 10.62 11884.18 4871.74 22837.71 00:33:19.267 ======================================================== 00:33:19.267 Total : 10935.00 42.71 5882.13 784.30 22837.71 00:33:19.267 00:33:19.525 13:17:31 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:33:19.525 13:17:31 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:22.052 Initializing NVMe Controllers 00:33:22.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:22.052 Controller IO queue size 128, less than required. 00:33:22.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:22.052 Controller IO queue size 128, less than required. 00:33:22.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:22.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:22.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:22.052 Initialization complete. Launching workers. 00:33:22.052 ======================================================== 00:33:22.052 Latency(us) 00:33:22.052 Device Information : IOPS MiB/s Average min max 00:33:22.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1037.16 259.29 127740.86 70358.49 214215.09 00:33:22.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.01 150.75 235683.19 91120.08 480179.87 00:33:22.052 ======================================================== 00:33:22.052 Total : 1640.18 410.04 167426.02 70358.49 480179.87 00:33:22.052 00:33:22.310 13:17:34 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:22.310 Initializing NVMe Controllers 00:33:22.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:22.310 Controller IO queue size 128, less than required. 00:33:22.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:22.310 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:22.310 Controller IO queue size 128, less than required. 00:33:22.310 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:22.310 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:33:22.310 WARNING: Some requested NVMe devices were skipped 00:33:22.310 No valid NVMe controllers or AIO or URING devices found 00:33:22.310 13:17:34 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:24.836 Initializing NVMe Controllers 00:33:24.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:24.836 Controller IO queue size 128, less than required. 00:33:24.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:24.836 Controller IO queue size 128, less than required. 00:33:24.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:24.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:24.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:24.836 Initialization complete. Launching workers. 00:33:24.836 00:33:24.836 ==================== 00:33:24.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:24.836 TCP transport: 00:33:24.836 polls: 5655 00:33:24.836 idle_polls: 3531 00:33:24.836 sock_completions: 2124 00:33:24.836 nvme_completions: 4361 00:33:24.836 submitted_requests: 6550 00:33:24.836 queued_requests: 1 00:33:24.836 00:33:24.836 ==================== 00:33:24.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:24.836 TCP transport: 00:33:24.836 polls: 6063 00:33:24.836 idle_polls: 3952 00:33:24.836 sock_completions: 2111 00:33:24.836 nvme_completions: 4229 00:33:24.836 submitted_requests: 6336 00:33:24.836 queued_requests: 1 00:33:24.836 ======================================================== 00:33:24.836 Latency(us) 00:33:24.836 Device Information : IOPS MiB/s Average min max 00:33:24.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1087.27 271.82 122760.35 73424.00 196156.58 00:33:24.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1054.35 263.59 124183.77 56237.98 177307.96 00:33:24.836 ======================================================== 00:33:24.836 Total : 2141.62 535.41 123461.12 56237.98 196156.58 00:33:24.836 00:33:25.094 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:25.094 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.693 13:17:37 
nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.693 rmmod nvme_tcp 00:33:25.693 rmmod nvme_fabrics 00:33:25.693 rmmod nvme_keyring 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@493 -- # '[' -n 118775 ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@494 -- # killprocess 118775 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 118775 ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 118775 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118775 00:33:25.693 killing process with pid 118775 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118775' 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@967 -- # kill 118775 00:33:25.693 13:17:37 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@972 -- # wait 118775 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@282 -- # remove_spdk_ns 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:26.627 ************************************ 00:33:26.627 END TEST nvmf_perf 00:33:26.627 ************************************ 00:33:26.627 00:33:26.627 real 0m15.488s 00:33:26.627 user 0m42.620s 00:33:26.627 sys 0m6.148s 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@103 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh 
--transport=tcp 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:26.627 ************************************ 00:33:26.627 START TEST nvmf_fio_host 00:33:26.627 ************************************ 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:26.627 * Looking for test storage... 00:33:26.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.627 13:17:38 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:26.627 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.627 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.627 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.627 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.627 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- 
nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@156 -- 
# NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:26.628 Cannot find device "nvmf_tgt_br" 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:33:26.628 Cannot find device "nvmf_tgt_br2" 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@160 -- # true 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:26.628 Cannot find device "nvmf_tgt_br" 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:26.628 Cannot find device "nvmf_tgt_br2" 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:33:26.628 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:26.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:26.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set 
nvmf_init_if up 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:26.885 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:26.886 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:27.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:33:27.143 00:33:27.143 --- 10.0.0.2 ping statistics --- 00:33:27.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.143 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:27.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:27.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:33:27.143 00:33:27.143 --- 10.0.0.3 ping statistics --- 00:33:27.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.143 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:27.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:33:27.143 00:33:27.143 --- 10.0.0.1 ping statistics --- 00:33:27.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.143 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@437 -- # return 0 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=119247 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 119247 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 119247 ']' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:27.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.143 13:17:39 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.143 [2024-07-15 13:17:39.483417] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:27.143 [2024-07-15 13:17:39.485150] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:27.143 [2024-07-15 13:17:39.485242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.402 [2024-07-15 13:17:39.626417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:27.402 [2024-07-15 13:17:39.718436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.402 [2024-07-15 13:17:39.718550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.402 [2024-07-15 13:17:39.718572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.402 [2024-07-15 13:17:39.718586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.402 [2024-07-15 13:17:39.718599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:27.402 [2024-07-15 13:17:39.718748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.402 [2024-07-15 13:17:39.719481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.402 [2024-07-15 13:17:39.719549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.402 [2024-07-15 13:17:39.719556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.402 [2024-07-15 13:17:39.792253] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:27.402 [2024-07-15 13:17:39.792381] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:27.402 [2024-07-15 13:17:39.793013] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:27.402 [2024-07-15 13:17:39.793363] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:27.402 [2024-07-15 13:17:39.793415] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:28.333 [2024-07-15 13:17:40.721213] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.333 13:17:40 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:28.898 Malloc1 00:33:28.898 13:17:41 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:29.156 13:17:41 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:29.414 13:17:41 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.979 [2024-07-15 13:17:42.161455] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.980 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:30.237 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host 
-- common/autotest_common.sh@1345 -- # grep libasan 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:30.238 13:17:42 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:30.238 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:30.238 fio-3.35 00:33:30.238 Starting 1 thread 00:33:32.765 00:33:32.765 test: (groupid=0, jobs=1): err= 0: pid=119373: Mon Jul 15 13:17:44 2024 00:33:32.765 read: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2007msec) 00:33:32.765 slat (usec): min=2, max=992, avg= 2.74, stdev= 8.24 00:33:32.765 clat (usec): min=4514, max=15946, avg=7640.15, stdev=1101.43 00:33:32.765 lat (usec): min=4518, max=15949, avg=7642.88, stdev=1101.38 00:33:32.765 clat percentiles (usec): 00:33:32.765 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:33:32.765 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:33:32.765 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9503], 00:33:32.765 | 99.00th=[12125], 99.50th=[13042], 99.90th=[15270], 99.95th=[15926], 00:33:32.765 | 99.99th=[15926] 00:33:32.765 bw ( KiB/s): min=33352, max=36960, per=100.00%, avg=35306.00, stdev=1662.49, samples=4 00:33:32.765 iops : min= 8338, max= 9240, avg=8826.50, stdev=415.62, samples=4 00:33:32.765 write: IOPS=8840, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec); 0 zone resets 00:33:32.765 slat (usec): min=2, max=289, avg= 2.79, stdev= 2.45 00:33:32.765 clat (usec): min=3415, max=15138, avg=6813.35, stdev=993.90 00:33:32.765 lat (usec): min=3428, max=15140, avg=6816.14, stdev=993.86 00:33:32.765 clat percentiles (usec): 00:33:32.765 | 1.00th=[ 5276], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:33:32.765 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:33:32.765 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7832], 95.00th=[ 8455], 00:33:32.765 | 99.00th=[10945], 99.50th=[11600], 99.90th=[14091], 99.95th=[14353], 00:33:32.765 | 99.99th=[14484] 00:33:32.765 bw ( 
KiB/s): min=33448, max=37288, per=99.98%, avg=35354.00, stdev=1670.94, samples=4 00:33:32.765 iops : min= 8362, max= 9322, avg=8838.50, stdev=417.73, samples=4 00:33:32.765 lat (msec) : 4=0.03%, 10=97.16%, 20=2.81% 00:33:32.765 cpu : usr=62.16%, sys=26.42%, ctx=35, majf=0, minf=7 00:33:32.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:32.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:32.765 issued rwts: total=17715,17742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:32.765 00:33:32.765 Run status group 0 (all jobs): 00:33:32.765 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:33:32.765 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2007-2007msec 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:32.765 
13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:32.765 13:17:44 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:32.765 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:32.765 fio-3.35 00:33:32.765 Starting 1 thread 00:33:35.291 00:33:35.291 test: (groupid=0, jobs=1): err= 0: pid=119422: Mon Jul 15 13:17:47 2024 00:33:35.291 read: IOPS=7097, BW=111MiB/s (116MB/s)(223MiB/2008msec) 00:33:35.291 slat (usec): min=3, max=152, avg= 4.36, stdev= 2.32 00:33:35.291 clat (usec): min=1681, max=21503, avg=10945.52, stdev=2931.85 00:33:35.291 lat (usec): min=1684, max=21510, avg=10949.88, stdev=2932.24 00:33:35.291 clat percentiles (usec): 00:33:35.291 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8356], 00:33:35.291 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10683], 60.00th=[11469], 00:33:35.291 | 70.00th=[12256], 80.00th=[13173], 90.00th=[15008], 95.00th=[16581], 00:33:35.291 | 99.00th=[18482], 99.50th=[20055], 99.90th=[21365], 99.95th=[21365], 00:33:35.291 | 99.99th=[21627] 00:33:35.291 bw ( KiB/s): min=52672, max=67200, per=50.60%, avg=57464.00, stdev=6582.49, samples=4 00:33:35.291 iops : min= 3292, max= 4200, avg=3591.50, stdev=411.41, samples=4 00:33:35.291 write: IOPS=4143, BW=64.7MiB/s (67.9MB/s)(118MiB/1816msec); 0 zone resets 00:33:35.291 slat (usec): min=37, max=307, avg=41.69, stdev= 6.82 00:33:35.291 clat (usec): min=4000, max=21993, avg=12911.22, stdev=2374.77 00:33:35.291 lat (usec): min=4047, max=22040, avg=12952.90, stdev=2376.00 00:33:35.291 clat percentiles (usec): 00:33:35.291 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10945], 00:33:35.291 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:33:35.291 | 70.00th=[13960], 80.00th=[14877], 90.00th=[16057], 95.00th=[17171], 00:33:35.291 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21365], 99.95th=[21627], 00:33:35.291 | 99.99th=[21890] 00:33:35.291 bw ( KiB/s): min=54624, max=70656, per=90.10%, avg=59736.00, stdev=7354.08, samples=4 00:33:35.291 iops : min= 3414, max= 4416, avg=3733.50, stdev=459.63, samples=4 00:33:35.291 lat (msec) : 2=0.02%, 4=0.15%, 10=28.08%, 20=71.12%, 50=0.63% 00:33:35.291 cpu : usr=65.87%, sys=20.63%, ctx=112, majf=0, minf=16 00:33:35.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:35.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:35.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:35.291 issued rwts: total=14252,7525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:35.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:35.291 00:33:35.291 Run status group 0 (all jobs): 00:33:35.291 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=223MiB (234MB), run=2008-2008msec 00:33:35.291 WRITE: bw=64.7MiB/s (67.9MB/s), 64.7MiB/s-64.7MiB/s (67.9MB/s-67.9MB/s), io=118MiB (123MB), run=1816-1816msec 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- 
host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:33:35.292 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.549 rmmod nvme_tcp 00:33:35.549 rmmod nvme_fabrics 00:33:35.549 rmmod nvme_keyring 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@493 -- # '[' -n 119247 ']' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@494 -- # killprocess 119247 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 119247 ']' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 119247 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119247 00:33:35.549 killing process with pid 119247 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119247' 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 119247 00:33:35.549 13:17:47 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 119247 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:33:35.808 13:17:48 
nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:33:35.808 00:33:35.808 real 0m9.170s 00:33:35.808 user 0m27.196s 00:33:35.808 sys 0m4.397s 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.808 ************************************ 00:33:35.808 END TEST nvmf_fio_host 00:33:35.808 ************************************ 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@104 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:35.808 ************************************ 00:33:35.808 START TEST nvmf_failover 00:33:35.808 ************************************ 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:35.808 * Looking for test storage... 
00:33:35.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.808 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:35.809 13:17:48 
nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@452 -- # prepare_net_devs 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@414 -- # local -g is_hw=no 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@416 -- # remove_spdk_ns 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@436 -- # nvmf_veth_init 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:33:35.809 Cannot find device "nvmf_tgt_br" 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@159 -- # true 00:33:35.809 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@160 -- # ip 
link set nvmf_tgt_br2 nomaster 00:33:36.067 Cannot find device "nvmf_tgt_br2" 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@160 -- # true 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:33:36.067 Cannot find device "nvmf_tgt_br" 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@162 -- # true 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:33:36.067 Cannot find device "nvmf_tgt_br2" 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@163 -- # true 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:36.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@166 -- # true 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:36.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@167 -- # true 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:36.067 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:33:36.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:33:36.326 00:33:36.326 --- 10.0.0.2 ping statistics --- 00:33:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.326 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:33:36.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:36.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:33:36.326 00:33:36.326 --- 10.0.0.3 ping statistics --- 00:33:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.326 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:36.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:33:36.326 00:33:36.326 --- 10.0.0.1 ping statistics --- 00:33:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.326 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@437 -- # return 0 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@485 -- # nvmfpid=119631 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@486 -- # waitforlisten 119631 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 119631 ']' 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:36.326 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.326 [2024-07-15 13:17:48.654530] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.326 [2024-07-15 13:17:48.655737] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:33:36.326 [2024-07-15 13:17:48.655829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.584 [2024-07-15 13:17:48.794853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:36.584 [2024-07-15 13:17:48.864415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.584 [2024-07-15 13:17:48.864482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.584 [2024-07-15 13:17:48.864496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.584 [2024-07-15 13:17:48.864506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.584 [2024-07-15 13:17:48.864515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.584 [2024-07-15 13:17:48.867801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.584 [2024-07-15 13:17:48.867917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:36.584 [2024-07-15 13:17:48.867930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.584 [2024-07-15 13:17:48.921957] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:36.584 [2024-07-15 13:17:48.922048] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:36.584 [2024-07-15 13:17:48.922415] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:36.584 [2024-07-15 13:17:48.923218] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.584 13:17:48 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:36.841 [2024-07-15 13:17:49.256873] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.841 13:17:49 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:37.407 Malloc0 00:33:37.407 13:17:49 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:37.665 13:17:49 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:37.923 13:17:50 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:38.181 [2024-07-15 13:17:50.496943] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.181 13:17:50 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:38.444 [2024-07-15 13:17:50.768950] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:38.444 13:17:50 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:38.703 [2024-07-15 13:17:51.096922] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=119736 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 119736 /var/tmp/bdevperf.sock 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 119736 ']' 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.703 13:17:51 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:40.075 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.075 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:40.075 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:40.075 NVMe0n1 00:33:40.075 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:40.639 00:33:40.639 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=119778 00:33:40.639 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:40.639 13:17:52 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:41.578 13:17:53 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.844 [2024-07-15 13:17:54.163740] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163828] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163842] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163853] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163864] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163874] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163883] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163893] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163903] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 
13:17:54.163913] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163923] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163934] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163944] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163954] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163964] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163974] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163983] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.163993] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164003] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164013] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164023] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164033] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164042] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164052] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164062] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164072] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164082] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164092] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164103] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164113] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164123] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same 
with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164135] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164145] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164155] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164164] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164174] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164184] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164194] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164205] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164215] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164225] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164235] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164245] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164255] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164265] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164275] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164285] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164295] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164305] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164314] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164324] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164334] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164344] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164354] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164364] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.844 [2024-07-15 13:17:54.164374] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164384] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164395] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164404] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164414] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164424] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164434] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164444] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164454] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 [2024-07-15 13:17:54.164464] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112e580 is same with the state(5) to be set 00:33:41.845 13:17:54 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:45.160 13:17:57 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:45.160 00:33:45.160 13:17:57 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:45.725 [2024-07-15 13:17:57.893071] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893518] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893598] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893654] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893717] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893797] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893859] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893923] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.893989] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894049] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894110] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894175] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894253] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894327] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894380] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894429] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894477] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894525] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894572] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894620] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894667] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894715] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894786] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894866] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.894940] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895004] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895093] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 
00:33:45.725 [2024-07-15 13:17:57.895158] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895211] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895267] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895319] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895376] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895435] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895486] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895534] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895600] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895679] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895742] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895836] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895893] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.895943] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896005] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896076] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896133] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896203] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896263] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896315] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896364] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896413] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is 
same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896470] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896530] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896582] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896630] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896699] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896750] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896838] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896895] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.896944] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897014] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897080] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897148] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897199] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897213] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897222] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897230] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897238] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897247] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897255] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897263] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897271] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897279] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897288] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897296] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897304] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897312] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897320] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897328] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897336] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897344] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897353] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897361] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897369] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897377] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897385] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897393] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897401] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897409] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 [2024-07-15 13:17:57.897419] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3de0 is same with the state(5) to be set 00:33:45.725 13:17:57 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:49.006 13:18:00 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.006 [2024-07-15 13:18:01.181801] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.006 13:18:01 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:49.940 13:18:02 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:50.200 [2024-07-15 13:18:02.555375] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.555921] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556030] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556098] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556192] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556274] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556351] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556411] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556471] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556552] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556634] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556715] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556807] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556892] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.556955] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557030] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557105] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557180] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557254] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557329] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557404] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557479] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557559] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557634] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557709] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557803] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557881] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.557943] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558018] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558098] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558173] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558253] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558328] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558402] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558478] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558552] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558614] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558689] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558780] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558879] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.558966] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559049] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559125] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 
00:33:50.200 [2024-07-15 13:18:02.559206] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559281] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559357] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559431] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559498] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559574] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559671] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559742] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559839] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.559985] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560073] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560134] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560216] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560292] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560367] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560441] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560516] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560591] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560671] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560749] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560841] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560922] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is 
same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.560997] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.200 [2024-07-15 13:18:02.561072] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561164] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561232] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561306] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561381] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561460] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.561536] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562231] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562337] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562406] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562486] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562566] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562642] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562722] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562802] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562872] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562932] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.562991] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563051] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563114] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563190] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563255] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563346] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563413] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563487] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563562] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563652] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 [2024-07-15 13:18:02.563729] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4150 is same with the state(5) to be set 00:33:50.201 13:18:02 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@59 -- # wait 119778 00:33:56.757 0 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@61 -- # killprocess 119736 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 119736 ']' 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 119736 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119736 00:33:56.757 killing process with pid 119736 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119736' 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@967 -- # kill 119736 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@972 -- # wait 119736 00:33:56.757 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:56.757 [2024-07-15 13:17:51.171317] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:33:56.757 [2024-07-15 13:17:51.171443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119736 ] 00:33:56.757 [2024-07-15 13:17:51.310943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.757 [2024-07-15 13:17:51.381215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.757 Running I/O for 15 seconds... 
00:33:56.757 [2024-07-15 13:17:54.164687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.757 [2024-07-15 13:17:54.164751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.757 [2024-07-15 13:17:54.164797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.757 [2024-07-15 13:17:54.164816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.757 [2024-07-15 13:17:54.164832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.757 [2024-07-15 13:17:54.164846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.757 [2024-07-15 13:17:54.164862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.164876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.164892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.164906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.164923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.164936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.164952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.164966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 
13:17:54.165083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.758 [2024-07-15 13:17:54.165844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.758 [2024-07-15 13:17:54.165859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.165873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.165889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.165902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.165926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.165940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.165956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.165970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.165985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.165998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:56.759 [2024-07-15 13:17:54.166362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.759 [2024-07-15 13:17:54.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.759 [2024-07-15 13:17:54.166853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.759 [2024-07-15 13:17:54.166869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.166882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.166897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.166911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.166926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.166940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.166955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.166968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.166983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.166997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.167026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.167055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.760 [2024-07-15 13:17:54.167120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.760 [2024-07-15 13:17:54.167592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.760 [2024-07-15 13:17:54.167942] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.760 [2024-07-15 13:17:54.167955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.167970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.167984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.167999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.761 [2024-07-15 13:17:54.168427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.761 [2024-07-15 13:17:54.168725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461c90 is same with the state(5) to be set 00:33:56.761 [2024-07-15 13:17:54.168760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.761 [2024-07-15 13:17:54.168783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.761 [2024-07-15 13:17:54.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80528 len:8 PRP1 0x0 PRP2 0x0 00:33:56.761 [2024-07-15 13:17:54.168807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.761 [2024-07-15 13:17:54.168870] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2461c90 was disconnected and freed. reset controller. 
00:33:56.761 [2024-07-15 13:17:54.168890] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:33:56.761 [2024-07-15 13:17:54.168956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:56.761 [2024-07-15 13:17:54.168977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.761 [2024-07-15 13:17:54.168993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:56.761 [2024-07-15 13:17:54.169006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.761 [2024-07-15 13:17:54.169019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:56.761 [2024-07-15 13:17:54.169032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.761 [2024-07-15 13:17:54.169047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:56.761 [2024-07-15 13:17:54.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.761 [2024-07-15 13:17:54.169072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:56.761 [2024-07-15 13:17:54.169115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5e30 (9): Bad file descriptor 
00:33:56.761 [2024-07-15 13:17:54.173191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:33:56.761 [2024-07-15 13:17:54.203012] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
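For context: the long runs of "ABORTED - SQ DELETION" completions above are the host NVMe driver draining its queued READ/WRITE commands when the submission queue is torn down; bdev_nvme then fails over from the listener at 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller, which is the transition logged just above. The shell sketch below is illustrative only and is not taken from this run: it shows, assuming SPDK's stock scripts/rpc.py client, roughly how a two-listener TCP subsystem and the host-side attachment exercised here could be set up. The bdev name Malloc0, the serial number, and the second attach call used to register the alternate path are assumptions; option spellings should be checked against the SPDK revision under test.

  # Target side: one subsystem backed by a placeholder malloc bdev, with two TCP listeners
  # (primary 4420, failover 4421). Malloc0 and the serial number are placeholders.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: attach the controller on the primary path, then register the alternate path so
  # bdev_nvme has 10.0.0.2:4421 to fail over to when the 4420 listener goes away. Newer SPDK
  # revisions may additionally want an explicit multipath mode (e.g. -x failover) on the second call.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1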
00:33:56.762 [2024-07-15 13:17:57.897558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 
13:17:57.897957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.897971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.897987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.762 [2024-07-15 13:17:57.898586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.762 [2024-07-15 13:17:57.898600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91928 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.898977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.898990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:56.763 [2024-07-15 13:17:57.899224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.763 [2024-07-15 13:17:57.899508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.763 [2024-07-15 13:17:57.899522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899889] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.899974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.899988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.764 [2024-07-15 13:17:57.900382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 
[2024-07-15 13:17:57.900615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.764 [2024-07-15 13:17:57.900739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.764 [2024-07-15 13:17:57.900752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.765 [2024-07-15 13:17:57.900797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.765 [2024-07-15 13:17:57.900827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.765 [2024-07-15 13:17:57.900857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.765 [2024-07-15 13:17:57.900886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.900915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.900945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.900983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.900999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92576 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.765 [2024-07-15 13:17:57.901726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463d90 is same with the state(5) to be set 00:33:56.765 [2024-07-15 13:17:57.901786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.765 [2024-07-15 13:17:57.901801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.765 [2024-07-15 13:17:57.901812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92624 len:8 PRP1 0x0 PRP2 0x0 00:33:56.765 [2024-07-15 13:17:57.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.765 [2024-07-15 13:17:57.901886] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2463d90 was disconnected and freed. reset controller. 
00:33:56.765 [2024-07-15 13:17:57.901909] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:56.765 [2024-07-15 13:17:57.902015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:56.765 [2024-07-15 13:17:57.902042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.765 [2024-07-15 13:17:57.902058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:56.765 [2024-07-15 13:17:57.902071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.765 [2024-07-15 13:17:57.902084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:56.765 [2024-07-15 13:17:57.902098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.765 [2024-07-15 13:17:57.902112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:56.766 [2024-07-15 13:17:57.902125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.766 [2024-07-15 13:17:57.902138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.766 [2024-07-15 13:17:57.902188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5e30 (9): Bad file descriptor
00:33:56.766 [2024-07-15 13:17:57.906268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.766 [2024-07-15 13:17:57.936968] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
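The burst above is one complete failover cycle: bdev_nvme drops the TCP qpair, fails the path over from 10.0.0.2:4421 to 10.0.0.2:4422, and the reset completes ("Resetting controller successful"). The READ/WRITE commands reported as ABORTED - SQ DELETION are the I/Os that were in flight on the deleted submission queue when the path went away. A minimal sketch of the path setup that drives this behavior, using only the rpc.py calls that appear later in this same log (the rpc.py path, addresses, ports, and NQN are taken from the trace; this is only the relevant subset of host/failover.sh, not the full script):
    # Register additional TCP listeners so the initiator has alternate paths (4421/4422 in this run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Attach the same controller through each path on the bdevperf RPC socket; the extra
    # trids become the failover targets seen in the bdev_nvme_failover_trid notices above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1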
00:33:56.766 [2024-07-15 13:18:02.561543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.766 [2024-07-15 13:18:02.561590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.561608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.766 [2024-07-15 13:18:02.561623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.561637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.766 [2024-07-15 13:18:02.561650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.561664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.766 [2024-07-15 13:18:02.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.561689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e5e30 is same with the state(5) to be set 00:33:56.766 [2024-07-15 13:18:02.564089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.564973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.564988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.565019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.766 [2024-07-15 13:18:02.565033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.766 [2024-07-15 13:18:02.565049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.767 [2024-07-15 13:18:02.565224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.767 [2024-07-15 13:18:02.565303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.565970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.565986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.566000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.566015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.767 [2024-07-15 13:18:02.566029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.767 [2024-07-15 13:18:02.566044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.768 [2024-07-15 13:18:02.566300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:56.768 [2024-07-15 13:18:02.566453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.566972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.566987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.567000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.567016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.567029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.567045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.567058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.567074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.567087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.768 [2024-07-15 13:18:02.567103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.768 [2024-07-15 13:18:02.567117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.769 [2024-07-15 13:18:02.567507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 
[2024-07-15 13:18:02.567688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.567964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.567979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.769 [2024-07-15 13:18:02.568000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.568016] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463b80 is same with the state(5) to be set 00:33:56.769 [2024-07-15 13:18:02.568033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.769 [2024-07-15 13:18:02.568043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.769 [2024-07-15 13:18:02.568054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42472 len:8 PRP1 0x0 PRP2 0x0 00:33:56.769 [2024-07-15 13:18:02.568067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.769 [2024-07-15 13:18:02.568119] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2463b80 was disconnected and freed. reset controller. 00:33:56.769 [2024-07-15 13:18:02.568138] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:56.769 [2024-07-15 13:18:02.568153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.769 [2024-07-15 13:18:02.572192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.769 [2024-07-15 13:18:02.572266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5e30 (9): Bad file descriptor 00:33:56.769 [2024-07-15 13:18:02.608328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:56.769 00:33:56.769 Latency(us) 00:33:56.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.769 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.769 Verification LBA range: start 0x0 length 0x4000 00:33:56.769 NVMe0n1 : 15.01 8834.83 34.51 171.95 0.00 14178.68 670.25 21090.68 00:33:56.769 =================================================================================================================== 00:33:56.769 Total : 8834.83 34.51 171.95 0.00 14178.68 670.25 21090.68 00:33:56.769 Received shutdown signal, test time was about 15.000000 seconds 00:33:56.769 00:33:56.769 Latency(us) 00:33:56.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.769 =================================================================================================================== 00:33:56.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.769 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:56.769 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:56.769 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:56.769 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=119966 00:33:56.769 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 119966 /var/tmp/bdevperf.sock 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 119966 ']' 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:56.770 
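The long run of 'ABORTED - SQ DELETION (00/08)' notices above is the in-flight verify I/O that was still queued on the TCP qpair when that qpair was deleted for the controller reset; bdev_nvme then fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 and logs 'Resetting controller successful'. failover.sh counts those messages (grep -c, expecting a count of 3) before moving to the second phase, where bdevperf is started idle with -z and driven over its own RPC socket. A minimal sketch of that launch-and-drive pattern, using only the flags and paths visible in this log (the polling loop is an approximation of the test's waitforlisten helper, not its exact code):

    # Start bdevperf idle; -z makes it wait for an RPC 'perform_tests' instead of running immediately.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # Rough equivalent of waitforlisten: poll the RPC socket until the app answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Attach the NVMe-oF namespace through one of the advertised listeners ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # ... and kick off the timed run; perform_tests returns once the -t window has elapsed.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The log lines that follow show exactly this sequence: the extra listeners on 4421 and 4422 are added on the target side first, then the controller is attached, the active path is detached with bdev_nvme_detach_controller to force a failover, and perform_tests is issued against the bdevperf socket.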
13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:56.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:56.770 [2024-07-15 13:18:08.816910] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:56.770 13:18:08 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:56.770 [2024-07-15 13:18:09.064870] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:56.770 13:18:09 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:57.026 NVMe0n1 00:33:57.026 13:18:09 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:57.282 00:33:57.282 13:18:09 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:57.846 00:33:57.846 13:18:10 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:57.846 13:18:10 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:58.104 13:18:10 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:58.361 13:18:10 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:01.649 13:18:13 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:01.649 13:18:13 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:01.649 13:18:13 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=120092 00:34:01.649 13:18:13 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
-s /var/tmp/bdevperf.sock perform_tests 00:34:01.649 13:18:13 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@92 -- # wait 120092 00:34:03.020 0 00:34:03.020 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:03.020 [2024-07-15 13:18:08.279129] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:34:03.020 [2024-07-15 13:18:08.279951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119966 ] 00:34:03.020 [2024-07-15 13:18:08.420305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.020 [2024-07-15 13:18:08.489270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.020 [2024-07-15 13:18:10.614561] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:03.020 [2024-07-15 13:18:10.615099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.020 [2024-07-15 13:18:10.615231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.020 [2024-07-15 13:18:10.615328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.020 [2024-07-15 13:18:10.615408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.020 [2024-07-15 13:18:10.615480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.020 [2024-07-15 13:18:10.615560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.020 [2024-07-15 13:18:10.615652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.020 [2024-07-15 13:18:10.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.021 [2024-07-15 13:18:10.615837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.021 [2024-07-15 13:18:10.615999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.021 [2024-07-15 13:18:10.616111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32e30 (9): Bad file descriptor 00:34:03.021 [2024-07-15 13:18:10.618898] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:03.021 Running I/O for 1 seconds... 
00:34:03.021 00:34:03.021 Latency(us) 00:34:03.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:03.021 Verification LBA range: start 0x0 length 0x4000 00:34:03.021 NVMe0n1 : 1.01 8756.24 34.20 0.00 0.00 14547.75 2353.34 19303.33 00:34:03.021 =================================================================================================================== 00:34:03.021 Total : 8756.24 34.20 0.00 0.00 14547.75 2353.34 19303.33 00:34:03.021 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:03.021 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:03.021 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:03.278 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:03.278 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:03.536 13:18:15 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:03.794 13:18:16 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:07.076 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:07.076 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@108 -- # killprocess 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 119966 ']' 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119966' 00:34:07.359 killing process with pid 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@967 -- # kill 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@972 -- # wait 119966 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:07.359 13:18:19 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@492 -- # nvmfcleanup 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:07.924 rmmod nvme_tcp 00:34:07.924 rmmod nvme_fabrics 00:34:07.924 rmmod nvme_keyring 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@493 -- # '[' -n 119631 ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@494 -- # killprocess 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 119631 ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:07.924 killing process with pid 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119631' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@967 -- # kill 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@972 -- # wait 119631 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@282 -- # remove_spdk_ns 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@632 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:07.924 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:34:08.182 00:34:08.182 real 0m32.302s 00:34:08.182 user 1m55.537s 00:34:08.182 sys 0m11.722s 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:08.182 ************************************ 00:34:08.182 END TEST nvmf_failover 00:34:08.182 ************************************ 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@105 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:08.182 ************************************ 00:34:08.182 START TEST nvmf_host_discovery 00:34:08.182 ************************************ 00:34:08.182 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:08.182 * Looking for test storage... 00:34:08.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:34:08.183 Cannot find device "nvmf_tgt_br" 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:34:08.183 Cannot find device "nvmf_tgt_br2" 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@160 -- # true 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:34:08.183 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:34:08.441 Cannot find device "nvmf_tgt_br" 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:34:08.441 Cannot find device "nvmf_tgt_br2" 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- 
nvmf/common.sh@163 -- # true 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:08.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:08.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set 
nvmf_init_br master nvmf_br 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:34:08.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:34:08.441 00:34:08.441 --- 10.0.0.2 ping statistics --- 00:34:08.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.441 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:34:08.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:08.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:34:08.441 00:34:08.441 --- 10.0.0.3 ping statistics --- 00:34:08.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.441 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:08.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:34:08.441 00:34:08.441 --- 10.0.0.1 ping statistics --- 00:34:08.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.441 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@437 -- # return 0 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:34:08.441 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:34:08.442 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.442 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:34:08.442 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@485 -- # 
nvmfpid=120397 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@486 -- # waitforlisten 120397 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 120397 ']' 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:08.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:08.699 13:18:20 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.699 [2024-07-15 13:18:20.972546] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:08.699 [2024-07-15 13:18:20.973645] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:34:08.699 [2024-07-15 13:18:20.973712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.699 [2024-07-15 13:18:21.109124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.957 [2024-07-15 13:18:21.171776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.957 [2024-07-15 13:18:21.171835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.957 [2024-07-15 13:18:21.171847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.957 [2024-07-15 13:18:21.171856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.957 [2024-07-15 13:18:21.171863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.957 [2024-07-15 13:18:21.171910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.958 [2024-07-15 13:18:21.221844] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:08.958 [2024-07-15 13:18:21.222228] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
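At this point the virtual test network from nvmf_veth_init is in place (nvmf_init_if at 10.0.0.1 on the host, nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all bridged over nvmf_br), and nvmfappstart has launched the target in interrupt mode on core mask 0x2 inside that namespace. A condensed sketch of the same bring-up, using only commands that appear in this log (error handling and the second veth pair are omitted):

    # Namespace plus one veth pair: the host side keeps 10.0.0.1, the target side gets 10.0.0.2.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends together and allow NVMe/TCP traffic in.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # The target runs inside the namespace, on a single core (0x2), in interrupt mode.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The discovery listener on port 8009, the TCP transport, and the null0/null1 bdevs are then configured over the target's default /var/tmp/spdk.sock RPC socket, as the rpc_cmd calls immediately below show.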
00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 [2024-07-15 13:18:21.308827] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 [2024-07-15 13:18:21.320650] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 null0 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 null1 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.958 13:18:21 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=120432 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 120432 /tmp/host.sock 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 120432 ']' 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:08.958 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:08.958 13:18:21 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.958 [2024-07-15 13:18:21.401490] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:34:08.958 [2024-07-15 13:18:21.401578] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120432 ] 00:34:09.216 [2024-07-15 13:18:21.538472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.216 [2024-07-15 13:18:21.631928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:10.149 13:18:22 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 
00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.149 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' 
== '' ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 [2024-07-15 13:18:22.776589] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:10.407 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.665 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:10.666 13:18:22 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:34:10.666 13:18:23 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:34:10.666 13:18:23 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:11.231 [2024-07-15 13:18:23.396929] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:11.231 [2024-07-15 13:18:23.396973] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:11.231 [2024-07-15 13:18:23.397041] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:11.231 [2024-07-15 13:18:23.483099] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:11.231 [2024-07-15 13:18:23.540307] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:11.231 [2024-07-15 13:18:23.540377] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:11.797 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.798 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 [2024-07-15 13:18:24.396585] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:12.057 [2024-07-15 13:18:24.397706] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:12.057 [2024-07-15 13:18:24.397749] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:12.057 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ 
"$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.058 [2024-07-15 13:18:24.484774] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.058 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:12.058 13:18:24 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:12.316 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.316 [2024-07-15 13:18:24.543112] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:12.316 [2024-07-15 13:18:24.543161] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:12.316 [2024-07-15 13:18:24.543177] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:12.316 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:12.316 13:18:24 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:13.294 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:13.295 13:18:25 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.295 [2024-07-15 13:18:25.701168] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:13.295 [2024-07-15 13:18:25.701218] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:13.295 [2024-07-15 13:18:25.707722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.295 [2024-07-15 13:18:25.707762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.295 [2024-07-15 13:18:25.707786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.295 [2024-07-15 13:18:25.707796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.295 [2024-07-15 13:18:25.707806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:34:13.295 [2024-07-15 13:18:25.707815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.295 [2024-07-15 13:18:25.707824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.295 [2024-07-15 13:18:25.707834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.295 [2024-07-15 13:18:25.707843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:13.295 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.295 [2024-07-15 13:18:25.717669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.727689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.727851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.727882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.727895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.727914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.727929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.727939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.727950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.727966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.559 [2024-07-15 13:18:25.737759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.737853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.737876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.737887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.737904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.737929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.737941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.737950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.737966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.559 [2024-07-15 13:18:25.747819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.747910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.747932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.747943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.747960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.747974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.747983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.747992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.748007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.559 [2024-07-15 13:18:25.757874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.757971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.757993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.758004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.758032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.758048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.758057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.758067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.758082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.559 [2024-07-15 13:18:25.767927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.768011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.768031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.768042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.768058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.768072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.768081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.768090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.768105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:13.559 [2024-07-15 13:18:25.777985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:13.559 [2024-07-15 13:18:25.778111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.559 [2024-07-15 13:18:25.778133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc57c70 with addr=10.0.0.2, port=4420 00:34:13.559 [2024-07-15 13:18:25.778145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc57c70 is same with the state(5) to be set 00:34:13.559 [2024-07-15 13:18:25.778164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc57c70 (9): Bad file descriptor 00:34:13.559 [2024-07-15 13:18:25.778180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:13.559 [2024-07-15 13:18:25.778189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:13.559 [2024-07-15 13:18:25.778199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:13.559 [2024-07-15 13:18:25.778215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.559 [2024-07-15 13:18:25.787838] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:13.559 [2024-07-15 13:18:25.787874] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.559 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
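Every assertion in this trace is phrased in terms of a few query helpers from host/discovery.sh, and their bodies can be read directly off the rpc_cmd/jq pipelines traced above (@55, @59, @63 and @74-@75). The sketch below reconstructs them from those pipelines; the in-tree definitions may differ in minor details such as pipeline order, and the notify_id bookkeeping is inferred from the traced values (0 -> 1 -> 2 -> 4) rather than read from the script.

# Controller names attached on the host side (traced at host/discovery.sh@59).
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# Block devices created from the discovered namespaces (traced at @55).
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Listener ports (trsvcid) behind a given controller, e.g. "4420 4421" (traced at @63).
get_subsystem_paths() {
    local ctrlr=$1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# Count of notifications since the last seen ID (traced at @74-@75); advancing
# notify_id by the returned count matches the traced values but is an inference.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}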
00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:13.560 13:18:25 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.560 13:18:26 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.560 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.818 13:18:26 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.751 [2024-07-15 13:18:27.137018] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:14.751 [2024-07-15 13:18:27.137062] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:14.751 [2024-07-15 13:18:27.137082] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:15.010 [2024-07-15 13:18:27.223148] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:15.010 [2024-07-15 13:18:27.283453] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:15.010 [2024-07-15 13:18:27.283522] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.010 2024/07/15 13:18:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:34:15.010 request: 00:34:15.010 { 00:34:15.010 "method": "bdev_nvme_start_discovery", 00:34:15.010 "params": { 00:34:15.010 "name": "nvme", 00:34:15.010 "trtype": "tcp", 00:34:15.010 "traddr": "10.0.0.2", 00:34:15.010 "adrfam": "ipv4", 00:34:15.010 "trsvcid": "8009", 00:34:15.010 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:15.010 "wait_for_attach": true 00:34:15.010 } 00:34:15.010 } 00:34:15.010 Got JSON-RPC error response 00:34:15.010 GoRPCClient: error on JSON-RPC call 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.010 
13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.010 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.011 2024/07/15 13:18:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:34:15.011 request: 00:34:15.011 { 00:34:15.011 "method": "bdev_nvme_start_discovery", 00:34:15.011 "params": { 00:34:15.011 "name": "nvme_second", 00:34:15.011 "trtype": "tcp", 00:34:15.011 "traddr": "10.0.0.2", 00:34:15.011 "adrfam": "ipv4", 00:34:15.011 "trsvcid": "8009", 00:34:15.011 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:15.011 "wait_for_attach": true 00:34:15.011 } 00:34:15.011 } 00:34:15.011 Got JSON-RPC error response 00:34:15.011 GoRPCClient: error on JSON-RPC call 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:15.011 
13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.011 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.269 13:18:27 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:16.201 [2024-07-15 13:18:28.564525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.201 [2024-07-15 13:18:28.564615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc70000 with addr=10.0.0.2, port=8010 00:34:16.201 [2024-07-15 
13:18:28.564639] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:16.201 [2024-07-15 13:18:28.564651] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:16.201 [2024-07-15 13:18:28.564661] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:17.133 [2024-07-15 13:18:29.564495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.133 [2024-07-15 13:18:29.564571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc70000 with addr=10.0.0.2, port=8010 00:34:17.133 [2024-07-15 13:18:29.564594] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:17.133 [2024-07-15 13:18:29.564605] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:17.133 [2024-07-15 13:18:29.564615] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:18.506 [2024-07-15 13:18:30.564328] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:18.506 2024/07/15 13:18:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:34:18.506 request: 00:34:18.506 { 00:34:18.506 "method": "bdev_nvme_start_discovery", 00:34:18.506 "params": { 00:34:18.506 "name": "nvme_second", 00:34:18.506 "trtype": "tcp", 00:34:18.506 "traddr": "10.0.0.2", 00:34:18.506 "adrfam": "ipv4", 00:34:18.506 "trsvcid": "8010", 00:34:18.506 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:18.506 "wait_for_attach": false, 00:34:18.506 "attach_timeout_ms": 3000 00:34:18.506 } 00:34:18.506 } 00:34:18.506 Got JSON-RPC error response 00:34:18.506 GoRPCClient: error on JSON-RPC call 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:18.506 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 120432 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:18.507 rmmod nvme_tcp 00:34:18.507 rmmod nvme_fabrics 00:34:18.507 rmmod nvme_keyring 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@493 -- # '[' -n 120397 ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@494 -- # killprocess 120397 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 120397 ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 120397 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120397 00:34:18.507 killing process with pid 120397 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120397' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 120397 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 120397 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.507 13:18:30 
nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:34:18.507 00:34:18.507 real 0m10.486s 00:34:18.507 user 0m18.559s 00:34:18.507 sys 0m2.963s 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:18.507 ************************************ 00:34:18.507 13:18:30 nvmf_tcp_interrupt_mode.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.507 END TEST nvmf_host_discovery 00:34:18.507 ************************************ 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@106 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:18.766 ************************************ 00:34:18.766 START TEST nvmf_host_multipath_status 00:34:18.766 ************************************ 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:18.766 * Looking for test storage... 
00:34:18.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:18.766 13:18:31 
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:18.766 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # prepare_net_devs 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # local -g is_hw=no 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # remove_spdk_ns 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # nvmf_veth_init 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 
00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:34:18.767 Cannot find device "nvmf_tgt_br" 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:34:18.767 Cannot find device "nvmf_tgt_br2" 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # true 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:34:18.767 Cannot find device "nvmf_tgt_br" 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:34:18.767 Cannot find device "nvmf_tgt_br2" 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:34:18.767 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:19.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:19.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:19.025 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:19.284 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:19.284 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:19.284 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:34:19.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:19.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:34:19.285 00:34:19.285 --- 10.0.0.2 ping statistics --- 00:34:19.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.285 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:34:19.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:19.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:34:19.285 00:34:19.285 --- 10.0.0.3 ping statistics --- 00:34:19.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.285 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:19.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:34:19.285 00:34:19.285 --- 10.0.0.1 ping statistics --- 00:34:19.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.285 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@437 -- # return 0 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:19.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # nvmfpid=120907 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # waitforlisten 120907 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 120907 ']' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:19.285 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:19.285 [2024-07-15 13:18:31.616932] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.285 [2024-07-15 13:18:31.618473] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:34:19.285 [2024-07-15 13:18:31.618549] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.543 [2024-07-15 13:18:31.762501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:19.543 [2024-07-15 13:18:31.840394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.543 [2024-07-15 13:18:31.840677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.543 [2024-07-15 13:18:31.840942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.543 [2024-07-15 13:18:31.841178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.543 [2024-07-15 13:18:31.841322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.543 [2024-07-15 13:18:31.841557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.543 [2024-07-15 13:18:31.841584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.543 [2024-07-15 13:18:31.890160] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.543 [2024-07-15 13:18:31.890374] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.543 [2024-07-15 13:18:31.890718] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
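The long run of status assertions that follows drives everything through the bdevperf RPC socket and a jq filter over bdev_nvme_get_io_paths. Below is a minimal sketch of that per-port check, written as a condensed form of the port_status helper visible in the log; it is illustrative only. The field names (.poll_groups, .io_paths, .transport.trsvcid, .current, .connected, .accessible) are taken from the jq filters themselves, and the rpc.py path and /var/tmp/bdevperf.sock socket are the ones the test uses — the full bdev_nvme_get_io_paths output may carry additional fields not shown here.

#!/usr/bin/env bash
# Sketch of the per-port status check exercised repeatedly below (assumed,
# condensed form of host/multipath_status.sh's port_status helper).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

port_status() {
  # $1 = trsvcid (e.g. 4420), $2 = field (current|connected|accessible), $3 = expected value
  local port=$1 field=$2 expected=$3 actual
  actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
    | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
  [[ "$actual" == "$expected" ]]
}

# Example: assert that the 4420 listener is the current, connected, accessible path.
port_status 4420 current true && port_status 4420 connected true && port_status 4420 accessible true

The same pattern is applied to port 4421 after each nvmf_subsystem_listener_set_ana_state change, which is why the log alternates between the two trsvcid values in the jq filters that follow.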
00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=120907 00:34:19.543 13:18:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.802 [2024-07-15 13:18:32.214639] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.802 13:18:32 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:20.367 Malloc0 00:34:20.367 13:18:32 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:20.623 13:18:32 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.880 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.138 [2024-07-15 13:18:33.594708] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.396 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:21.653 [2024-07-15 13:18:33.898672] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:21.653 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=121003 00:34:21.653 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:21.653 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:21.653 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 121003 /var/tmp/bdevperf.sock 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 121003 ']' 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:21.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:21.654 13:18:33 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:22.586 13:18:35 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:22.586 13:18:35 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:22.586 13:18:35 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:23.152 13:18:35 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:23.409 Nvme0n1 00:34:23.409 13:18:35 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:23.667 Nvme0n1 00:34:23.667 13:18:36 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:23.667 13:18:36 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:26.237 13:18:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:26.237 13:18:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:26.238 13:18:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:26.497 13:18:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:27.433 13:18:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:27.433 13:18:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:27.433 13:18:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.433 13:18:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:27.690 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.690 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:27.690 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.690 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.950 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.950 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.950 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:27.950 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.208 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.208 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:28.466 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.466 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:28.724 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.724 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:28.724 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.724 13:18:40 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.982 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.982 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:28.982 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.982 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.238 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.238 
13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:29.238 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:29.495 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:29.761 13:18:41 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:30.696 13:18:42 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:30.696 13:18:42 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:30.696 13:18:42 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:30.696 13:18:42 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.954 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.954 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:30.954 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.954 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.212 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.212 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.212 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.212 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.469 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.469 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.469 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.469 13:18:43 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.728 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.728 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:31.728 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.728 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.987 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.987 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:31.987 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.987 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.246 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.246 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:32.246 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:32.504 13:18:44 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:32.761 13:18:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:33.695 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:33.695 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:33.695 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.695 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:34.260 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.260 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:34.260 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.260 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:34.538 13:18:46 
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.538 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.538 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.538 13:18:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.797 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.797 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.797 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.797 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:35.055 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.055 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:35.055 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.055 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:35.313 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.313 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:35.313 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.313 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:35.572 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.572 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:35.572 13:18:47 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:36.139 13:18:48 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:36.139 13:18:48 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
host/multipath_status.sh@105 -- # sleep 1 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.511 13:18:49 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.769 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.769 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.769 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.769 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.028 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.028 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.028 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.028 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.286 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.286 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.286 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.286 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.544 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.544 13:18:50 
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:38.544 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.544 13:18:50 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.112 13:18:51 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.112 13:18:51 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:39.112 13:18:51 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:39.397 13:18:51 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:39.654 13:18:51 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:40.586 13:18:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:40.586 13:18:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:40.586 13:18:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.586 13:18:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.844 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.844 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:40.844 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.844 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:41.101 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.101 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:41.101 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.101 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:41.359 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.359 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:41.359 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.359 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:41.616 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.616 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:41.616 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.616 13:18:53 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:41.873 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.873 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:41.873 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:41.873 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.129 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:42.129 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:42.129 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:42.386 13:18:54 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:42.644 13:18:55 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:43.576 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:43.576 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:43.576 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.576 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.141 
13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.141 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.706 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.706 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.706 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.706 13:18:56 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.975 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.976 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:44.976 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.976 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:45.248 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:45.248 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:45.248 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.248 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.520 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.520 13:18:57 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:45.778 13:18:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:45.778 13:18:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:46.036 13:18:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:46.294 13:18:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:47.228 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:47.228 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:47.228 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.228 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.486 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.486 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:47.486 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.486 13:18:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.744 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.744 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.744 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.744 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.001 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.001 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.001 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.001 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:48.259 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.259 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:48.259 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.259 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.517 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.517 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:48.517 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.517 13:19:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.774 13:19:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.774 13:19:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:48.774 13:19:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:49.031 13:19:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:49.288 13:19:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:50.221 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:50.221 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:50.221 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.221 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.786 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.786 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:50.786 13:19:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.786 13:19:02 
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.786 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.786 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.786 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.786 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:51.043 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.043 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:51.043 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.043 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:51.301 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.301 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:51.302 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.302 13:19:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:51.870 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:52.447 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:52.447 13:19:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:53.821 13:19:05 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:53.821 13:19:05 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:53.821 13:19:05 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.821 13:19:05 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:53.821 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.821 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:53.821 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.821 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.080 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.080 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:54.338 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.338 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:54.338 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.338 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:54.596 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:54.596 13:19:06 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.854 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.854 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:54.854 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.854 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:34:55.111 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.111 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:55.111 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.111 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.369 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.369 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:55.369 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:55.626 13:19:07 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:55.884 13:19:08 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:56.819 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:56.819 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:56.819 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:56.819 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.129 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.129 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:57.129 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.129 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.388 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.388 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.388 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.388 13:19:09 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.645 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.646 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.646 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.646 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.211 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 121003 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 121003 ']' 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 121003 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:58.470 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121003 00:34:58.741 killing process with pid 121003 00:34:58.741 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:58.741 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:58.741 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121003' 00:34:58.741 13:19:10 
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 121003 00:34:58.741 13:19:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 121003 00:34:58.741 Connection closed with partial response: 00:34:58.741 00:34:58.741 00:34:58.741 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 121003 00:34:58.741 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:58.741 [2024-07-15 13:18:33.976136] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:34:58.741 [2024-07-15 13:18:33.976260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121003 ] 00:34:58.741 [2024-07-15 13:18:34.111029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.741 [2024-07-15 13:18:34.203341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.741 Running I/O for 90 seconds... 00:34:58.741 [2024-07-15 13:18:51.588291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.741 [2024-07-15 13:18:51.588377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.588975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.588997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
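For reference, the repeated path checks earlier in this run reduce to a single pattern. The sketch below is reconstructed from the rpc.py and jq invocations visible in this log; port_status and set_ANA_state are the helper names used by host/multipath_status.sh, but their bodies here are an approximation inferred from the log output, not the literal script, and the rpc shell variable is shorthand introduced only for this example.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # port_status <trsvcid> <field> <expected>: ask the bdevperf app over its RPC
  # socket for its io_paths and compare one field (current/connected/accessible)
  # of the path that uses the given target port.
  port_status() {
      local port=$1 field=$2 expected=$3
      local actual
      actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

  # set_ANA_state <state-for-4420> <state-for-4421>: flip the ANA state of the
  # two target listeners; the test then sleeps 1s before re-checking the paths.
  set_ANA_state() {
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Example mirroring the @133-@135 step above: make 4421 inaccessible and
  # expect 4420 to stay the current path while 4421 stops being accessible.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4420 current true && port_status 4421 accessible false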
00:34:58.741 [2024-07-15 13:18:51.589402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.589583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:58.741 [2024-07-15 13:18:51.591866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.741 [2024-07-15 13:18:51.591895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.591929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.591946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.591974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.591992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.592925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
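The bdevperf trace dumped here (via the cat of .../test/nvmf/host/try.txt above) can also be summarized with standard tools instead of being read record by record. A small sketch, assuming the same try.txt path used by this run; the log shell variable is shorthand introduced only for this example.

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

  # Count completions returned with the ANA 'ASYMMETRIC ACCESS INACCESSIBLE'
  # (03/02) status versus all printed completions.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"
  grep -c 'spdk_nvme_print_completion' "$log"

  # Break the inaccessible completions down by queue id.
  grep 'ASYMMETRIC ACCESS INACCESSIBLE' "$log" \
      | grep -o 'qid:[0-9]*' | sort | uniq -c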
00:34:58.742 [2024-07-15 13:18:51.592969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.592998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.742 [2024-07-15 13:18:51.593638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:18:51.593834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:18:51.593849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:19:08.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.742 [2024-07-15 13:19:08.146408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.742 [2024-07-15 13:19:08.146445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.146666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.146681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:34:58.743 [2024-07-15 13:19:08.148365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.148969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.148990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.743 [2024-07-15 13:19:08.149415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:58.743 [2024-07-15 13:19:08.149437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:58.744 [2024-07-15 13:19:08.149488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.149525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.149560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.149596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.149967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.149989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.150004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.150028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.150044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.150066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.150081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.152786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.152830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.152924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.744 [2024-07-15 13:19:08.152961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.152982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.152997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:34:58.744 [2024-07-15 13:19:08.153091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.744 [2024-07-15 13:19:08.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:58.744 [2024-07-15 13:19:08.153430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.153795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.153947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.153962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.745 [2024-07-15 13:19:08.155187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:58.745 [2024-07-15 13:19:08.155435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.745 [2024-07-15 13:19:08.155868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:58.745 [2024-07-15 13:19:08.155891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.155907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.155929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.155944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.155965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.155980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.156205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.156964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.156979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:34:58.746 [2024-07-15 13:19:08.157180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.157675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.157734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.157749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.158271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.158352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.746 [2024-07-15 13:19:08.158389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.746 [2024-07-15 13:19:08.158426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.746 [2024-07-15 13:19:08.158448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.158462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:58.747 [2024-07-15 13:19:08.158781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.158980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.158996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:74 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.159177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.159912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.159949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.160204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.160225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.160254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.161935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.161964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.161991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.747 [2024-07-15 13:19:08.162045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:34:58.747 [2024-07-15 13:19:08.162176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.747 [2024-07-15 13:19:08.162227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:58.747 [2024-07-15 13:19:08.162250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.162386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.162426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.162500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.162573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.162609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.162630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.162646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.164577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:58.748 [2024-07-15 13:19:08.164922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.164969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.164990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.165005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.748 [2024-07-15 13:19:08.165258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.748 [2024-07-15 13:19:08.165294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:58.748 [2024-07-15 13:19:08.165315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.165330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.165351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.165366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.165388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.165410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.166229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.166265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.166302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.166324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.166340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:34:58.749 [2024-07-15 13:19:08.169333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.749 [2024-07-15 13:19:08.169780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.749 [2024-07-15 13:19:08.169924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.749 [2024-07-15 13:19:08.169947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.169963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.171474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.171538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.171577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.171616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.171966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.171981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:58.750 [2024-07-15 13:19:08.172055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.172528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.172544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.173858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.173897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.173927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.750 [2024-07-15 13:19:08.173962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.173987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.174014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.174035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.174051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.174072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.174088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.174109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.750 [2024-07-15 13:19:08.174124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:58.750 [2024-07-15 13:19:08.174146] nvme_qpair.c: 
243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs logged between 00:34:58.750 and 00:34:58.756 [2024-07-15 13:19:08.174161 - 13:19:08.192447] for in-flight READ and WRITE I/O on qid:1 (sqid:1 nsid:1 len:8, cid 0-125, LBAs 70696-73048); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd advancing from 0018 through 007f and wrapping to 0065 00:34:58.756 [2024-07-15 13:19:08.192447] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.192484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.192520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.192557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.192594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.192630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.192666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.192703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.192725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.192740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.193531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:58.756 [2024-07-15 13:19:08.193579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.193818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.193891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.193964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.193985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.194536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.756 [2024-07-15 13:19:08.194574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.756 [2024-07-15 13:19:08.194695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.756 [2024-07-15 13:19:08.194717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.194732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.194786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.194827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.194864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.194901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.194952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.194974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.194989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.195025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.195063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:34:58.757 [2024-07-15 13:19:08.195820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.195967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.195982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.196570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.196592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.757 [2024-07-15 13:19:08.196607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:58.757 [2024-07-15 13:19:08.200896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.757 [2024-07-15 13:19:08.200911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.200933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:58.758 [2024-07-15 13:19:08.200948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.200969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.200985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.758 [2024-07-15 13:19:08.201526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.201976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.201992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.202013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.202030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.202052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.202067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:58.758 
[2024-07-15 13:19:08.202089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.202105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.203170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.203212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:58.758 [2024-07-15 13:19:08.203244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.758 [2024-07-15 13:19:08.203261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:58.759 [2024-07-15 13:19:08.203477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.759 [2024-07-15 13:19:08.203506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:58.759 Received shutdown signal, test time was about 34.691966 seconds 00:34:58.759 00:34:58.759 Latency(us) 00:34:58.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.759 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:58.759 Verification LBA range: 
start 0x0 length 0x4000 00:34:58.759 Nvme0n1 : 34.69 8535.16 33.34 0.00 0.00 14966.45 170.36 4026531.84 00:34:58.759 =================================================================================================================== 00:34:58.759 Total : 8535.16 33.34 0.00 0.00 14966.45 170.36 4026531.84 00:34:58.759 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # nvmfcleanup 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:59.017 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:59.017 rmmod nvme_tcp 00:34:59.017 rmmod nvme_fabrics 00:34:59.017 rmmod nvme_keyring 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # '[' -n 120907 ']' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # killprocess 120907 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 120907 ']' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 120907 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120907 00:34:59.275 killing process with pid 120907 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120907' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 120907 00:34:59.275 13:19:11 
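The wall of ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices above is path-related NVMe status rather than data corruption: the pair in parentheses is status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), which is what a controller returns while the ANA state of the path under test is inaccessible. The summary that follows reports 0.00 Fail/s and 0.00 TO/s, so the affected I/O was evidently retried on the surviving path. For a rough per-queue tally of these completions from a saved copy of the console output (the file name below is only a placeholder), something like the following works:

  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' console.log | sort | uniq -c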
nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 120907 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@282 -- # remove_spdk_ns 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.275 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:59.276 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.276 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:34:59.276 00:34:59.276 real 0m40.713s 00:34:59.276 user 2m5.162s 00:34:59.276 sys 0m15.912s 00:34:59.276 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.276 ************************************ 00:34:59.276 13:19:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:59.276 END TEST nvmf_host_multipath_status 00:34:59.276 ************************************ 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@107 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:59.553 ************************************ 00:34:59.553 START TEST nvmf_discovery_remove_ifc 00:34:59.553 ************************************ 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:59.553 * Looking for test storage... 
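The teardown traced above follows a fixed order: the subsystem is deleted over JSON-RPC, the temporary try.txt is removed, nvmftestfini unloads the kernel NVMe-oF modules, the target process (pid 120907 in this run) is killed and waited on, and the initiator address is flushed. A condensed sketch of that sequence, with a placeholder pid variable instead of the script's internal bookkeeping:

  # hedged recap of the teardown steps seen in the trace above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$tgt_pid" && wait "$tgt_pid"   # 120907 here
  ip -4 addr flush nvmf_init_if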
00:34:59.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.553 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # prepare_net_devs 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # nvmf_veth_init 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 
-- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:34:59.554 Cannot find device "nvmf_tgt_br" 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:34:59.554 Cannot find device "nvmf_tgt_br2" 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # true 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:34:59.554 Cannot find device "nvmf_tgt_br" 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:34:59.554 Cannot find device "nvmf_tgt_br2" 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:34:59.554 13:19:11 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:34:59.554 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:59.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:59.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:34:59.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:59.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:34:59.813 00:34:59.813 --- 10.0.0.2 ping statistics --- 00:34:59.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.813 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:34:59.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:59.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:34:59.813 00:34:59.813 --- 10.0.0.3 ping statistics --- 00:34:59.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.813 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:59.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:34:59.813 00:34:59.813 --- 10.0.0.1 ping statistics --- 00:34:59.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.813 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@437 -- # return 0 00:34:59.813 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@485 -- # nvmfpid=122290 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@486 -- # waitforlisten 122290 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 122290 ']' 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.814 13:19:12 
nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:59.814 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.072 [2024-07-15 13:19:12.315709] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:00.072 [2024-07-15 13:19:12.318159] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:35:00.072 [2024-07-15 13:19:12.318316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.072 [2024-07-15 13:19:12.460967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.072 [2024-07-15 13:19:12.522695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.072 [2024-07-15 13:19:12.522807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.072 [2024-07-15 13:19:12.522834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.072 [2024-07-15 13:19:12.522849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.072 [2024-07-15 13:19:12.522861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:00.072 [2024-07-15 13:19:12.522902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.330 [2024-07-15 13:19:12.573621] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:00.330 [2024-07-15 13:19:12.573960] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
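The target-side environment that produced the notices above is built entirely from the ip/iptables calls traced in nvmf_veth_init; condensed into one place (the individual "ip link set ... up" steps and the initial cleanup attempts are omitted, names exactly as in the trace), the setup amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                               # bridge ties the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root ns -> both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target ns -> root ns
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2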
00:35:00.330 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.331 [2024-07-15 13:19:12.659707] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.331 [2024-07-15 13:19:12.671725] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:00.331 null0 00:35:00.331 [2024-07-15 13:19:12.703661] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=122330 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 122330 /tmp/host.sock 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 122330 ']' 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:00.331 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:00.331 13:19:12 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.331 [2024-07-15 13:19:12.787202] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:35:00.331 [2024-07-15 13:19:12.787302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122330 ] 00:35:00.588 [2024-07-15 13:19:12.925262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.588 [2024-07-15 13:19:12.999613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:01.522 13:19:13 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.454 [2024-07-15 13:19:14.895463] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:02.454 [2024-07-15 13:19:14.895514] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:02.454 [2024-07-15 13:19:14.895549] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:02.712 [2024-07-15 13:19:14.981615] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:02.712 [2024-07-15 13:19:15.038581] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:02.712 [2024-07-15 13:19:15.038657] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:02.712 [2024-07-15 13:19:15.038686] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 
00:35:02.712 [2024-07-15 13:19:15.038704] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:02.712 [2024-07-15 13:19:15.038730] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:02.712 [2024-07-15 13:19:15.039896] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ecc660 was disconnected and freed. delete nvme_qpair. 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.712 13:19:15 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.712 13:19:15 
nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.085 13:19:16 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.018 13:19:17 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.951 13:19:18 
nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.951 13:19:18 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:07.327 13:19:19 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.979 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.238 [2024-07-15 13:19:20.466963] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:08.238 [2024-07-15 13:19:20.467086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.238 [2024-07-15 13:19:20.467109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.238 [2024-07-15 13:19:20.467131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.238 [2024-07-15 13:19:20.467146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.238 [2024-07-15 13:19:20.467162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:35:08.238 [2024-07-15 13:19:20.467176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.238 [2024-07-15 13:19:20.467192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.238 [2024-07-15 13:19:20.467206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.238 [2024-07-15 13:19:20.467222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.238 [2024-07-15 13:19:20.467237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.238 [2024-07-15 13:19:20.467252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95920 is same with the state(5) to be set 00:35:08.238 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.238 [2024-07-15 13:19:20.476954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95920 (9): Bad file descriptor 00:35:08.238 [2024-07-15 13:19:20.486993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:08.238 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:08.238 13:19:20 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.169 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.169 [2024-07-15 13:19:21.527840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:09.169 [2024-07-15 13:19:21.527955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95920 with addr=10.0.0.2, port=4420 00:35:09.169 [2024-07-15 13:19:21.527988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e95920 is same with the state(5) to be set 00:35:09.169 [2024-07-15 13:19:21.528055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e95920 (9): Bad file descriptor 00:35:09.169 [2024-07-15 13:19:21.528597] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:35:09.170 [2024-07-15 13:19:21.528659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:09.170 [2024-07-15 13:19:21.528683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:09.170 [2024-07-15 13:19:21.528701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:09.170 [2024-07-15 13:19:21.528737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:09.170 [2024-07-15 13:19:21.528759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:09.170 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.170 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:09.170 13:19:21 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.103 [2024-07-15 13:19:22.528855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:10.103 [2024-07-15 13:19:22.528944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:10.103 [2024-07-15 13:19:22.528961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:10.103 [2024-07-15 13:19:22.528976] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:10.103 [2024-07-15 13:19:22.529010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
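The controller being reset and failed here belongs to the host side of the test: a second SPDK app on /tmp/host.sock that drives discovery. Its setup, condensed from the trace at host/discovery_remove_ifc.sh@58-69 with the arguments copied verbatim (only line breaks added; backgrounding is implied by the hostpid assignment at @59):

/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc_cmd -s /tmp/host.sock framework_start_init
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

The short ctrlr-loss-timeout-sec and reconnect-delay-sec values are what produce the 1-second reconnect retries and the quick controller drop seen in this part of the log.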
00:35:10.103 [2024-07-15 13:19:22.529051] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:10.103 [2024-07-15 13:19:22.529123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.103 [2024-07-15 13:19:22.529145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.103 [2024-07-15 13:19:22.529165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.103 [2024-07-15 13:19:22.529180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.103 [2024-07-15 13:19:22.529195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.103 [2024-07-15 13:19:22.529209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.103 [2024-07-15 13:19:22.529224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.103 [2024-07-15 13:19:22.529238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.103 [2024-07-15 13:19:22.529253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.103 [2024-07-15 13:19:22.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.103 [2024-07-15 13:19:22.529281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
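These admin-queue aborts and the "in failed state" messages are the intended outcome of the fault injected a few seconds earlier at host/discovery_remove_ifc.sh@75-76:

ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if   # pull the target address
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down              # and down the interface

With the address gone, every reconnect to 10.0.0.2:4420 fails with ETIMEDOUT (errno 110); once the 2-second ctrlr-loss timeout expires the discovery entry is removed and nvme0n1 is expected to disappear from the bdev list polled below.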
00:35:10.103 [2024-07-15 13:19:22.529332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e383c0 (9): Bad file descriptor 00:35:10.103 [2024-07-15 13:19:22.530325] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:10.103 [2024-07-15 13:19:22.530361] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.103 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.359 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:10.360 13:19:22 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:11.293 13:19:23 
nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.293 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.294 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:11.294 13:19:23 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:12.226 [2024-07-15 13:19:24.533248] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:12.226 [2024-07-15 13:19:24.533303] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:12.226 [2024-07-15 13:19:24.533324] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:12.226 [2024-07-15 13:19:24.619411] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:12.226 [2024-07-15 13:19:24.675591] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:12.226 [2024-07-15 13:19:24.675675] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:12.226 [2024-07-15 13:19:24.675702] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:12.226 [2024-07-15 13:19:24.675720] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:12.226 [2024-07-15 13:19:24.675731] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:12.226 [2024-07-15 13:19:24.676847] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1eaa5c0 was disconnected and freed. delete nvme_qpair. 
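Every "sleep 1" block in this stretch is the test's wait_for_bdev polling: it compares the current bdev list against an expected value until they match. get_bdev_list is exactly as traced at @29; the wait_for_bdev loop structure and retry bound are not fully visible in this log, so they are sketched here for illustration (rpc_cmd is the suite's RPC wrapper):

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1 i
    for ((i = 0; i < 30; i++)); do          # retry bound assumed, not taken from the script
        [[ "$(get_bdev_list)" == "$expected" ]] && return 0
        sleep 1
    done
    return 1
}

ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # restore the address (@82-83)
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1    # rediscovery attaches a fresh controller, so the bdev comes back as nvme1n1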
00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 122330 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 122330 ']' 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 122330 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122330 00:35:12.484 killing process with pid 122330 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122330' 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 122330 00:35:12.484 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 122330 00:35:12.742 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:12.742 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # nvmfcleanup 00:35:12.742 13:19:24 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:35:12.742 rmmod nvme_tcp 00:35:12.742 rmmod nvme_fabrics 00:35:12.742 rmmod nvme_keyring 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # '[' -n 122290 ']' 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # killprocess 122290 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 122290 ']' 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 122290 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122290 00:35:12.742 killing process with pid 122290 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:12.742 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122290' 00:35:12.743 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 122290 00:35:12.743 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 122290 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:35:13.000 00:35:13.000 real 0m13.539s 00:35:13.000 user 0m22.435s 00:35:13.000 sys 0m3.728s 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.000 
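The nvmftestfini path traced here reduces to unloading the host kernel NVMe/TCP modules, stopping the target, and tearing the namespace back down; condensed (killprocess and remove_spdk_ns are the suite's own helpers, used as in the trace):

sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"         # pid 122290 in this run
remove_spdk_ns                 # suite helper; tears down the nvmf_tgt_ns_spdk namespace set up earlier
ip -4 addr flush nvmf_init_if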
************************************ 00:35:13.000 END TEST nvmf_discovery_remove_ifc 00:35:13.000 ************************************ 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@108 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:13.000 ************************************ 00:35:13.000 START TEST nvmf_identify_kernel_target 00:35:13.000 ************************************ 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:13.000 * Looking for test storage... 00:35:13.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:13.000 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- 
nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # '[' 1 -eq 1 ']' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@12 -- # basename /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh 00:35:13.001 skipping identify_kernel_nvmf.sh test in the interrupt mode 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@12 -- # echo 'skipping identify_kernel_nvmf.sh test in the interrupt mode' 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # exit 0 00:35:13.001 00:35:13.001 real 0m0.092s 00:35:13.001 user 0m0.043s 00:35:13.001 sys 0m0.054s 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.001 13:19:25 nvmf_tcp_interrupt_mode.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.001 ************************************ 00:35:13.001 END TEST nvmf_identify_kernel_target 00:35:13.001 ************************************ 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@109 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:13.293 ************************************ 00:35:13.293 START TEST nvmf_auth_host 00:35:13.293 ************************************ 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:13.293 * Looking for test storage... 
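identify_kernel_nvmf.sh exits immediately here because the run is in interrupt mode, and run_test still records it as a pass (0m0.092s real). A minimal sketch of that guard pattern; the flag name TEST_INTERRUPT_MODE is illustrative only, the trace shows just the already-evaluated '[' 1 -eq 1 ']':

    #!/usr/bin/env bash
    # Skip a test that is not supported in interrupt mode, exiting 0 so the
    # run_test wrapper still counts it as passed.
    if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then
        echo "skipping $(basename "$0") test in the interrupt mode"
        exit 0
    fi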
00:35:13.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.293 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmf_veth_init 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:13.294 
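auth.sh has just declared the digest and DH-group arrays plus the subsystem/host NQNs and the keys/ckeys slots it will exercise. The sweep itself is not shown at this point in the trace; a hedged sketch of the combination loop those arrays imply, with connect_with_auth as a purely hypothetical helper:

    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

    # Illustration only: the real auth.sh drives SPDK RPCs and nvme connect
    # for each digest/DH-group pair against the subsystem declared above.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            connect_with_auth "$digest" "$dhgroup" \
                nqn.2024-02.io.spdk:cnode0 nqn.2024-02.io.spdk:host0
        done
    done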
13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:35:13.294 Cannot find device "nvmf_tgt_br" 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:35:13.294 Cannot find device "nvmf_tgt_br2" 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@160 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:35:13.294 Cannot find device "nvmf_tgt_br" 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:35:13.294 Cannot find device "nvmf_tgt_br2" 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:13.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:13.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:13.294 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:13.562 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:35:13.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:35:13.563 00:35:13.563 --- 10.0.0.2 ping statistics --- 00:35:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.563 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:35:13.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:13.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:35:13.563 00:35:13.563 --- 10.0.0.3 ping statistics --- 00:35:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.563 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:13.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:35:13.563 00:35:13.563 --- 10.0.0.1 ping statistics --- 00:35:13.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.563 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@437 -- # return 0 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@485 -- # nvmfpid=122780 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -L nvme_auth 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@486 -- # waitforlisten 122780 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 122780 ']' 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
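nvmf_veth_init builds the bridged topology used by the tcp/virt runs: the initiator interface stays in the root namespace at 10.0.0.1, the two target interfaces sit inside nvmf_tgt_ns_spdk at 10.0.0.2 and 10.0.0.3, everything is joined through the nvmf_br bridge, port 4420 is opened on the initiator side, and three one-shot pings verify reachability before nvmf_tgt is launched inside the namespace with --interrupt-mode -L nvme_auth. Condensed from the commands traced above (the initial "Cannot find device" teardown attempts are omitted, and the ordering is slightly regrouped):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns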
00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:13.563 13:19:25 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=fd00e57294246dfe0add70a80bd54191 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.oNl 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key fd00e57294246dfe0add70a80bd54191 0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 fd00e57294246dfe0add70a80bd54191 0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=fd00e57294246dfe0add70a80bd54191 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.oNl 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.oNl 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oNl 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 
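nvmfappstart records the target pid (122780) and waitforlisten, completed just above, blocks until the application's RPC socket is available before the script continues. The real helper lives in SPDK's autotest_common.sh; the stand-in below only captures the idea and is not that implementation:

    # Block until the target has created its RPC socket, bailing out if the
    # process dies first.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while ! [ -S "$sock" ]; do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            sleep 0.1
        done
    }

    wait_for_rpc_socket 122780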
00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=5dad11aec4444d3ddfa33a534740291b2c47f81d72724a72f5ff2a0a030c2ec9 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.2rN 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 5dad11aec4444d3ddfa33a534740291b2c47f81d72724a72f5ff2a0a030c2ec9 3 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 5dad11aec4444d3ddfa33a534740291b2c47f81d72724a72f5ff2a0a030c2ec9 3 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=5dad11aec4444d3ddfa33a534740291b2c47f81d72724a72f5ff2a0a030c2ec9 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.2rN 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.2rN 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2rN 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=5068a83abc6f0f9c4a1b3e332d2eb12ae05b47d10d3c4803 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.STm 00:35:14.936 13:19:27 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 5068a83abc6f0f9c4a1b3e332d2eb12ae05b47d10d3c4803 0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 5068a83abc6f0f9c4a1b3e332d2eb12ae05b47d10d3c4803 0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=5068a83abc6f0f9c4a1b3e332d2eb12ae05b47d10d3c4803 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.STm 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.STm 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.STm 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=7f29069ba6c90e247b23625b2c9d61894c9a1a6d5a41293f 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.wQG 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 7f29069ba6c90e247b23625b2c9d61894c9a1a6d5a41293f 2 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 7f29069ba6c90e247b23625b2c9d61894c9a1a6d5a41293f 2 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=7f29069ba6c90e247b23625b2c9d61894c9a1a6d5a41293f 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.wQG 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.wQG 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wQG 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=afd9e297f1fc9c7fe3990a015d553807 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.jg0 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key afd9e297f1fc9c7fe3990a015d553807 1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 afd9e297f1fc9c7fe3990a015d553807 1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=afd9e297f1fc9c7fe3990a015d553807 00:35:14.936 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.jg0 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.jg0 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jg0 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=d40510c4dc65b288d46f4550e25c70ce 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.nxL 
00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key d40510c4dc65b288d46f4550e25c70ce 1 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 d40510c4dc65b288d46f4550e25c70ce 1 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=d40510c4dc65b288d46f4550e25c70ce 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:35:14.937 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.nxL 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.nxL 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nxL 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=a27135249d810d8fee166ae6a23a5227fb2fc951bbdd5ed9 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.W5M 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key a27135249d810d8fee166ae6a23a5227fb2fc951bbdd5ed9 2 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 a27135249d810d8fee166ae6a23a5227fb2fc951bbdd5ed9 2 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=a27135249d810d8fee166ae6a23a5227fb2fc951bbdd5ed9 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.W5M 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.W5M 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@76 -- # 
keys[3]=/tmp/spdk.key-sha384.W5M 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=d87a71d6f1fbc5f7478328aea908498b 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.IXz 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key d87a71d6f1fbc5f7478328aea908498b 0 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 d87a71d6f1fbc5f7478328aea908498b 0 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=d87a71d6f1fbc5f7478328aea908498b 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.IXz 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.IXz 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IXz 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@731 -- # key=ad2eebeea6eabaab863158b6e7a70d47d7de72cc05613c8d0a41319e5235c333 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.r9I 
00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key ad2eebeea6eabaab863158b6e7a70d47d7de72cc05613c8d0a41319e5235c333 3 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 ad2eebeea6eabaab863158b6e7a70d47d7de72cc05613c8d0a41319e5235c333 3 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # key=ad2eebeea6eabaab863158b6e7a70d47d7de72cc05613c8d0a41319e5235c333 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.r9I 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.r9I 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.r9I 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 122780 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 122780 ']' 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:15.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
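Each gen_dhchap_key call above draws random bytes with xxd, wraps them in a DHHC-1 secret, and stores the result mode 0600 under /tmp; the digest ids 0-3 map to null/sha256/sha384/sha512 as in the digests table the helper declares. The inline python that does the wrapping is not expanded in the trace, so the sketch below follows the common DHHC-1 convention (base64 of the key bytes followed by their CRC32) and may differ in detail from SPDK's format_dhchap_key:

    # Sketch of DHHC-1 secret generation; the key||crc32 layout is an assumption.
    gen_dhchap_key_sketch() {
        local digest_id=$1 len_bytes=$2                   # e.g. 0 (null) and 16
        local hex file
        hex=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)    # raw hex, as in the trace
        file=$(mktemp -t spdk.key-sketch.XXX)
        python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[2]); b=k+binascii.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(b).decode()))' \
            "$digest_id" "$hex" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

The resulting files (/tmp/spdk.key-null.oNl, /tmp/spdk.key-sha512.2rN, and so on) are then handed to the running target below via rpc_cmd keyring_file_add_key key0/ckey0 through key4, so the authentication runs can reference them by keyring name.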
00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:15.195 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oNl 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2rN ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2rN 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.STm 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wQG ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wQG 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.761 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jg0 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nxL ]] 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host 
-- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nxL 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.762 13:19:27 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.W5M 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IXz ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IXz 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.r9I 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 
-- # echo 10.0.0.1 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@636 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@643 -- # local block nvme 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ ! -e /sys/module/nvmet ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@646 -- # modprobe nvmet 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:15.762 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:16.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:16.018 Waiting for block devices as requested 00:35:16.018 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:16.274 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:16.532 13:19:28 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:16.790 No valid GPT data, bailing 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:35:16.790 13:19:29 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n2 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n2 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:16.790 No valid GPT data, bailing 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n2 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n3 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n3 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:16.790 No valid GPT data, bailing 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n3 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme1n1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1664 
-- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme1n1 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:35:16.790 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:17.047 No valid GPT data, bailing 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme1n1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@660 -- # [[ -b /dev/nvme1n1 ]] 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@669 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@672 -- # echo /dev/nvme1n1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@676 -- # echo tcp 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@677 -- # echo 4420 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@678 -- # echo ipv4 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:17.047 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.1 -t tcp -s 4420 00:35:17.047 00:35:17.047 Discovery Log Number of Records 2, Generation counter 2 00:35:17.047 =====Discovery Log Entry 0====== 00:35:17.047 trtype: tcp 00:35:17.047 adrfam: ipv4 00:35:17.047 subtype: current discovery subsystem 00:35:17.047 treq: not specified, sq flow control disable supported 00:35:17.047 portid: 1 00:35:17.047 trsvcid: 4420 00:35:17.047 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:17.048 traddr: 10.0.0.1 00:35:17.048 eflags: none 00:35:17.048 sectype: none 00:35:17.048 =====Discovery Log Entry 1====== 00:35:17.048 trtype: tcp 00:35:17.048 adrfam: ipv4 00:35:17.048 subtype: nvme subsystem 00:35:17.048 treq: not specified, sq flow control disable supported 00:35:17.048 portid: 1 
00:35:17.048 trsvcid: 4420 00:35:17.048 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:17.048 traddr: 10.0.0.1 00:35:17.048 eflags: none 00:35:17.048 sectype: none 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.048 13:19:29 
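
Everything echoed above builds the kernel NVMe-oF/TCP target through configfs and then locks it down to a single authenticated host: subsystem nqn.2024-02.io.spdk:cnode0 backed by /dev/nvme1n1, TCP port 10.0.0.1:4420, allow_any_host turned off, host0 linked into allowed_hosts, and an hmac(sha256)/ffdhe2048 DHHC-1 key pair installed for that host. The xtrace records only the echo arguments, not their redirect targets, so the attribute names in the sketch below are the standard nvmet configfs ones and should be treated as an assumption:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  echo 0 > "$subsys/attr_allow_any_host"     # only explicitly allowed hosts may connect
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 'hmac(sha256)'          > "$host/dhchap_hash"       # digest under test
  echo ffdhe2048               > "$host/dhchap_dhgroup"    # DH group under test
  echo 'DHHC-1:00:NTA2...==:'  > "$host/dhchap_key"        # host secret (abbreviated)
  echo 'DHHC-1:02:N2Yy...==:'  > "$host/dhchap_ctrl_key"   # controller secret (abbreviated)

  # Sanity check, as in the discovery output above: the new subsystem is listed
  # next to the discovery subsystem (the run also passes --hostnqn/--hostid).
  nvme discover -t tcp -a 10.0.0.1 -s 4420
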
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.048 nvme0n1 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.048 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@100 
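
On the SPDK side the same combination is then exercised end to end: bdev_nvme_set_options advertises which digests and DH groups the host may negotiate, bdev_nvme_attach_controller performs the fabric connect with the named keyring entries, and a successful handshake is confirmed by bdev_nvme_get_controllers reporting nvme0 before the controller is detached again. The equivalent standalone RPC calls (rpc_cmd wraps scripts/rpc.py):

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # prints nvme0 on success
  scripts/rpc.py bdev_nvme_detach_controller nvme0
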
-- # for digest in "${digests[@]}" 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:17.344 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
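
From this point on the trace is the same sequence repeated: the three nested loops entered above sweep every digest, every DH group and every key index, re-keying the kernel target and re-attaching from SPDK each time. The skeleton, with the helper names as they appear in host/auth.sh:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # reconfigure the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach on the host side
          done
      done
  done
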
nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 nvme0n1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.345 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.602 nvme0n1 00:35:17.602 
13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.602 13:19:29 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.602 13:19:29 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 nvme0n1 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.859 13:19:30 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:17.859 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 nvme0n1 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.860 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:18.117 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.118 
13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.118 nvme0n1 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.118 13:19:30 
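
Key index 4 has no companion controller secret (the ckey check in this iteration expands to [[ -z '' ]]), so the attach traced above passes only --dhchap-key key4: the target still authenticates the host, but the host does not require the controller to prove possession of a key in return. The corresponding standalone call:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4   # no --dhchap-ctrlr-key: unidirectional authentication
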
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.118 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.376 nvme0n1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 
00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 
00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.376 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.634 nvme0n1 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.634 13:19:30 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.634 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.891 nvme0n1 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:18.892 13:19:31 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.892 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.149 nvme0n1 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey= 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.149 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.407 nvme0n1 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.407 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.670 nvme0n1 00:35:19.670 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.670 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.670 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.670 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.670 13:19:31 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.670 13:19:32 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.670 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.946 nvme0n1 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.946 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.203 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.460 nvme0n1 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.460 13:19:32 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.718 nvme0n1 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.718 
13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.718 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.982 nvme0n1 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.983 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:21.246 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.247 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.504 nvme0n1 00:35:21.504 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.504 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:21.505 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.762 13:19:33 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.762 13:19:33 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.326 nvme0n1 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 
-- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:22.326 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.327 13:19:34 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.327 13:19:34 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.892 nvme0n1 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.892 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.893 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.457 nvme0n1 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:23.457 13:19:35 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.457 13:19:35 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.714 nvme0n1 00:35:23.714 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.714 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.714 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.714 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.714 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.972 13:19:36 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.903 nvme0n1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.903 13:19:37 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.837 nvme0n1 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 
ffdhe8192 2 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # 
ip=NVMF_INITIATOR_IP 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.837 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.402 nvme0n1 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.402 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.659 13:19:38 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.224 nvme0n1 00:35:27.224 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.224 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.224 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.224 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.224 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.224 13:19:39 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- 
# ip_candidates=() 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.482 13:19:39 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.055 nvme0n1 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.055 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.313 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.313 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.313 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.313 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:28.314 13:19:40 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 nvme0n1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:28.314 13:19:40 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.314 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.573 nvme0n1 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.573 13:19:40 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:28.573 13:19:40 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.573 13:19:40 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.830 nvme0n1 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.830 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.831 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.831 13:19:41 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 nvme0n1 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.088 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.089 nvme0n1 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.089 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
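For readers following the xtrace above: each pass of the for-dhgroup / for-keyid loops in host/auth.sh exercises one sha384 + DH-group + key combination end to end. A minimal per-iteration sketch, reconstructed only from the rpc_cmd calls visible in this trace (the keys/ckeys arrays and the rpc_cmd JSON-RPC wrapper are assumed from context and not shown here), looks like this:

  # Sketch only -- assumes keys[]/ckeys[] hold the DHHC-1 secrets seen above and
  # that rpc_cmd forwards to the SPDK host application's JSON-RPC server.
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # load the matching secret on the target side for this key slot
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
      # restrict the host to exactly this digest/DH-group pair
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # connect with DH-HMAC-CHAP; the controller key is passed only when a ckey exists
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # verify the authenticated controller came up, then detach before the next combination
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The bare nvme0n1 tokens interleaved in the trace appear to be the bdev name printed by the attach call for the namespace it exposes; they are expected output, not errors.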
00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.347 nvme0n1 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.347 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.605 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.606 nvme0n1 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.606 13:19:41 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.606 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.867 nvme0n1 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:29.867 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.868 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.137 nvme0n1 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.137 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.399 nvme0n1 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.399 13:19:42 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 
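The get_main_ns_ip helper that repeats throughout this trace (local ip, the ip_candidates map, then echo 10.0.0.1) just selects which environment variable carries the address for the transport under test and prints its value. A rough reconstruction from the xtrace output follows; the TEST_TRANSPORT name, the ${!ip} indirection, and the return-on-failure behavior are assumptions, since the trace only shows the already-expanded values on the success path:

  # Reconstruction from the xtrace above, not the literal nvmf/common.sh source.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) resolve the initiator IP
    # pick the variable *name* for the active transport, then dereference it
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"    # expands to 10.0.0.1 in this run
  }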
00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.399 13:19:42 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.656 nvme0n1 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.656 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.912 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.169 nvme0n1 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.169 13:19:43 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.169 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.426 nvme0n1 00:35:31.426 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.426 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.426 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.427 
13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.427 13:19:43 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.684 nvme0n1 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.684 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.941 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.197 nvme0n1 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.197 
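
One detail worth calling out in the trace above is how the controller (bidirectional) key stays optional: for keyid=4 the script has ckey= empty, the [[ -z '' ]] branch skips writing a ctrl key into the target config, and on the host side host/auth.sh@58 builds the extra RPC argument with bash's ${var:+...} expansion so that bdev_nvme_attach_controller only receives --dhchap-ctrlr-key when a controller key actually exists. A stripped-down sketch of that idiom follows; the key names and the keyid list are placeholders for illustration, not the values the test registers:

  # Illustration of the optional --dhchap-ctrlr-key argument (placeholder key names).
  ckeys=([1]=ckey1 [2]=ckey2)            # keyid 4 deliberately has no controller key
  for keyid in 1 2 4; do
      # ${ckeys[keyid]:+...} expands to the two extra words only when ckeys[keyid] is set and non-empty.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
  done
  # keyids 1 and 2 print the --dhchap-ctrlr-key pair; keyid 4 prints the attach without it,
  # which matches the key4-only attach seen in the trace.
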
13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.197 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.198 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.761 nvme0n1 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.761 13:19:44 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 
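
Each connect_authenticate pass in this trace reduces to the same host-side sequence, shown just above for sha384/ffdhe6144 with key index 0: restrict the initiator to the digest and DH group under test, attach the controller over TCP with the selected key (plus the controller key when one exists), confirm the controller appears, then detach before the next combination. A condensed sketch of that one pass is below; rpc_cmd is the test harness's JSON-RPC helper, and key0/ckey0 refer to keys registered earlier in the run, outside this excerpt:

  # Limit DH-HMAC-CHAP negotiation to the parameters exercised in this iteration.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Attach over TCP, authenticating with key0 and the matching controller key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Roughly what host/auth.sh@64-65 does: verify the controller connected, then clean up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
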
00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:32.761 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.762 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.326 nvme0n1 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:33.326 
13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.326 13:19:45 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.890 nvme0n1 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.890 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.147 nvme0n1 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.147 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:34.405 13:19:46 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.405 13:19:46 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.970 nvme0n1 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.970 13:19:47 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:34.970 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.971 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.971 13:19:47 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.904 nvme0n1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.904 13:19:48 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:35.904 13:19:48 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.904 13:19:48 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.835 nvme0n1 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.835 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.836 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:37.770 nvme0n1 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.770 13:19:49 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:37.770 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host 
-- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.771 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.704 nvme0n1 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.704 13:19:50 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:38.704 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.705 13:19:50 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.269 nvme0n1 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.269 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 
00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 nvme0n1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.527 13:19:51 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe2048 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.527 13:19:51 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.785 nvme0n1 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:39.785 13:19:52 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:39.785 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:39.786 13:19:52 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.786 nvme0n1 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.786 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 nvme0n1 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 13:19:52 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.045 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.046 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:40.303 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- 
# ip_candidates=() 00:35:40.303 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.304 nvme0n1 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.304 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.562 nvme0n1 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:40.562 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:40.563 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:40.563 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.563 13:19:52 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.820 nvme0n1 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:40.820 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:40.821 
13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.821 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.080 nvme0n1 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host 
-- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.080 nvme0n1 00:35:41.080 13:19:53 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.080 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.338 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.338 nvme0n1 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.339 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 
00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.596 13:19:53 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.854 nvme0n1 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:41.854 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.855 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.113 nvme0n1 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.113 13:19:54 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.113 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.371 nvme0n1 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:42.372 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:42.630 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.631 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:42.631 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:42.631 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:42.631 13:19:54 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:42.631 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.631 13:19:54 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.890 nvme0n1 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.890 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.148 nvme0n1 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:43.148 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:43.149 13:19:55 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.149 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 nvme0n1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.714 13:19:55 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.972 nvme0n1 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=2 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.972 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.537 nvme0n1 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.537 13:19:56 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.859 nvme0n1 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.859 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.117 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.375 nvme0n1 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.375 13:19:57 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmQwMGU1NzI5NDI0NmRmZTBhZGQ3MGE4MGJkNTQxOTGEzdWP: 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRhZDExYWVjNDQ0NGQzZGRmYTMzYTUzNDc0MDI5MWIyYzQ3ZjgxZDcyNzI0YTcyZjVmZjJhMGEwMzBjMmVjOaS0aiI=: 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.375 13:19:57 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.306 nvme0n1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # 
ip=NVMF_INITIATOR_IP 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.306 13:19:58 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.872 nvme0n1 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWZkOWUyOTdmMWZjOWM3ZmUzOTkwYTAxNWQ1NTM4MDeKBQ51: 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDQwNTEwYzRkYzY1YjI4OGQ0NmY0NTUwZTI1YzcwY2U5nPgW: 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.872 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.803 nvme0n1 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.803 13:19:59 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.803 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTI3MTM1MjQ5ZDgxMGQ4ZmVlMTY2YWU2YTIzYTUyMjdmYjJmYzk1MWJiZGQ1ZWQ5i+ocpA==: 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: ]] 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDg3YTcxZDZmMWZiYzVmNzQ3ODMyOGFlYTkwODQ5OGLtzetj: 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.804 13:19:59 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.804 13:20:00 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.804 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 nvme0n1 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWQyZWViZWVhNmVhYmFhYjg2MzE1OGI2ZTdhNzBkNDdkN2RlNzJjYzA1NjEzYzhkMGE0MTMxOWU1MjM1YzMzM2a0+pM=: 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
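The exchange above is one full pass of connect_authenticate: the SPDK initiator is restricted to a single digest/DH group, attaches to the kernel target with the matching DH-HMAC-CHAP key, confirms that controller nvme0 appears, and detaches before the next combination. A condensed sketch of that pass as standalone commands (assuming the rpc_cmd wrapper used by the test resolves to scripts/rpc.py against the running target, and that key4 was generated earlier in this run):

  # Allow only the digest/dhgroup pair under test on the host side.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Attach over TCP with DH-HMAC-CHAP key 4; keyid 4 has no controller key in this run,
  # so --dhchap-ctrlr-key is omitted (unidirectional authentication only).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # The pass succeeds if the new controller is listed as nvme0; detach to reset state.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0
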
00:35:48.736 13:20:00 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.339 nvme0n1 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA2OGE4M2FiYzZmMGY5YzRhMWIzZTMzMmQyZWIxMmFlMDViNDdkMTBkM2M0ODAzyEvDTw==: 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2YyOTA2OWJhNmM5MGUyNDdiMjM2MjViMmM5ZDYxODk0YzlhMWE2ZDVhNDEyOTNm6RHz8Q==: 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
host/auth.sh@112 -- # get_main_ns_ip 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.339 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.340 2024/07/15 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:35:49.340 request: 00:35:49.340 { 00:35:49.340 "method": "bdev_nvme_attach_controller", 00:35:49.340 "params": { 00:35:49.340 "name": "nvme0", 00:35:49.340 "trtype": "tcp", 00:35:49.340 "traddr": "10.0.0.1", 00:35:49.340 "adrfam": "ipv4", 00:35:49.340 "trsvcid": "4420", 00:35:49.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:49.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:49.340 "prchk_reftag": false, 00:35:49.340 
"prchk_guard": false, 00:35:49.340 "hdgst": false, 00:35:49.340 "ddgst": false 00:35:49.340 } 00:35:49.340 } 00:35:49.340 Got JSON-RPC error response 00:35:49.340 GoRPCClient: error on JSON-RPC call 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.340 2024/07/15 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:35:49.340 request: 00:35:49.340 { 00:35:49.340 "method": "bdev_nvme_attach_controller", 00:35:49.340 "params": { 00:35:49.340 "name": "nvme0", 00:35:49.340 "trtype": "tcp", 00:35:49.340 "traddr": "10.0.0.1", 00:35:49.340 "adrfam": "ipv4", 00:35:49.340 "trsvcid": "4420", 00:35:49.340 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:49.340 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:49.340 "prchk_reftag": false, 00:35:49.340 "prchk_guard": false, 00:35:49.340 "hdgst": false, 00:35:49.340 "ddgst": false, 00:35:49.340 "dhchap_key": "key2" 00:35:49.340 } 00:35:49.340 } 00:35:49.340 Got JSON-RPC error response 00:35:49.340 GoRPCClient: error on JSON-RPC call 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:49.340 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.597 13:20:01 
nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:35:49.597 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.598 2024/07/15 13:20:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:35:49.598 request: 00:35:49.598 { 00:35:49.598 "method": "bdev_nvme_attach_controller", 00:35:49.598 "params": { 00:35:49.598 "name": "nvme0", 00:35:49.598 "trtype": "tcp", 00:35:49.598 "traddr": "10.0.0.1", 00:35:49.598 "adrfam": "ipv4", 00:35:49.598 "trsvcid": "4420", 00:35:49.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:49.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:49.598 "prchk_reftag": false, 00:35:49.598 "prchk_guard": false, 00:35:49.598 "hdgst": false, 00:35:49.598 "ddgst": false, 00:35:49.598 "dhchap_key": "key1", 00:35:49.598 "dhchap_ctrlr_key": "ckey2" 00:35:49.598 } 00:35:49.598 } 00:35:49.598 Got JSON-RPC error response 00:35:49.598 GoRPCClient: error on JSON-RPC call 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.598 rmmod nvme_tcp 00:35:49.598 rmmod nvme_fabrics 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@493 -- # '[' -n 122780 ']' 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@494 -- # killprocess 122780 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 122780 ']' 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 122780 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122780 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:49.598 killing process with pid 122780 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122780' 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 122780 00:35:49.598 13:20:01 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 122780 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 
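The three Input/output errors above are the point of this block, not a regression: with the target now keyed for sha256/ffdhe2048 and keyid 1, an attach with no DH-HMAC-CHAP key, with the wrong key, or with a mismatched controller key must all be rejected, and the NOT helper turns that non-zero exit into a pass. Outside the helper, the same expectation could be written roughly as follows (a hypothetical standalone check, reusing the addresses and NQNs from the log):

  # With authentication required by the target, an unauthenticated attach must fail.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "FAIL: attach succeeded without a DH-HMAC-CHAP key" >&2; exit 1
  fi
  # Likewise for a mismatched pairing: host key 1 with controller key 2 must be rejected.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "FAIL: attach succeeded with mismatched keys" >&2; exit 1
  fi
  # Either failure leaves no controllers behind, which the log verifies with jq length == 0.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
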
00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@690 -- # echo 0 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:49.855 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:35:49.856 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:35:49.856 13:20:02 nvmf_tcp_interrupt_mode.nvmf_auth_host -- nvmf/common.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:50.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:50.789 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:50.789 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:50.789 13:20:03 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oNl /tmp/spdk.key-null.STm /tmp/spdk.key-sha256.jg0 /tmp/spdk.key-sha384.W5M /tmp/spdk.key-sha512.r9I /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:35:50.789 13:20:03 nvmf_tcp_interrupt_mode.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:51.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:51.046 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:51.046 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:51.046 00:35:51.046 real 0m37.961s 00:35:51.046 user 0m18.185s 00:35:51.046 sys 0m3.647s 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:35:51.046 ************************************ 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.046 END TEST nvmf_auth_host 00:35:51.046 ************************************ 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@112 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:51.046 ************************************ 00:35:51.046 START TEST nvmf_digest 00:35:51.046 ************************************ 00:35:51.046 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:51.303 * Looking for test storage... 00:35:51.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.303 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:51.304 
13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
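The trace that follows is nvmf_veth_init rebuilding the virtual test network for the digest suite: the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace as 10.0.0.1, and the veth peers are bridged over nvmf_br. Condensed to its essential commands (a sketch of what the trace below executes, keeping only one target interface for brevity):

  # Namespace for the target side, plus veth pairs whose peers stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Initiator address in the root namespace, target address inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # Bridge the root-side peers together and open TCP/4420 so the NVMe-oF traffic passes.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity check: initiator and target can reach each other before the tests start.
  ping -c 1 10.0.0.2
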
00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@452 -- # prepare_net_devs 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@414 -- # local -g is_hw=no 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@416 -- # remove_spdk_ns 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@436 -- # nvmf_veth_init 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.304 13:20:03 
nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:35:51.304 Cannot find device "nvmf_tgt_br" 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@159 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:35:51.304 Cannot find device "nvmf_tgt_br2" 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@160 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:35:51.304 Cannot find device "nvmf_tgt_br" 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@162 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:35:51.304 Cannot find device "nvmf_tgt_br2" 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@163 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:51.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@166 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:51.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@167 -- # true 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:51.304 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:51.562 13:20:03 
nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:35:51.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:35:51.562 00:35:51.562 --- 10.0.0.2 ping statistics --- 00:35:51.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.562 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:35:51.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:51.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:35:51.562 00:35:51.562 --- 10.0.0.3 ping statistics --- 00:35:51.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.562 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:51.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:51.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:35:51.562 00:35:51.562 --- 10.0.0.1 ping statistics --- 00:35:51.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.562 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@437 -- # return 0 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.562 ************************************ 00:35:51.562 START TEST nvmf_digest_clean 00:35:51.562 ************************************ 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@485 -- # nvmfpid=124348 00:35:51.562 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@486 -- # waitforlisten 124348 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- 
nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode --wait-for-rpc 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124348 ']' 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:51.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:51.563 13:20:03 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:51.820 [2024-07-15 13:20:04.053144] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:51.820 [2024-07-15 13:20:04.055054] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:35:51.820 [2024-07-15 13:20:04.055163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.820 [2024-07-15 13:20:04.199429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.820 [2024-07-15 13:20:04.282576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.820 [2024-07-15 13:20:04.282681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.820 [2024-07-15 13:20:04.282701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.820 [2024-07-15 13:20:04.282717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.820 [2024-07-15 13:20:04.282732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.820 [2024-07-15 13:20:04.282826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.820 [2024-07-15 13:20:04.283453] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
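The nvmf_veth_init sequence traced above builds a small bridged test topology and then launches the target inside the namespace. As a condensed, order-preserving sketch of those commands (interface names and addresses exactly as they appear in the trace; the initial teardown of leftover interfaces and the verification pings are omitted):

    # Test network: one initiator veth on the host, two target veths inside a namespace,
    # all joined by a bridge; NVMe/TCP traffic to port 4420 is explicitly allowed.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Target started in the namespace, in interrupt mode, paused until RPC configuration:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode --wait-for-rpc

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm that the bridge forwards in both directions before the target comes up.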
00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.753 [2024-07-15 13:20:05.123198] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:52.753 null0 00:35:52.753 [2024-07-15 13:20:05.140004] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.753 [2024-07-15 13:20:05.168025] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124394 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124394 /var/tmp/bperf.sock 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124394 ']' 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.753 13:20:05 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:53.011 [2024-07-15 13:20:05.244529] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:35:53.011 [2024-07-15 13:20:05.244660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124394 ] 00:35:53.011 [2024-07-15 13:20:05.386227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.011 [2024-07-15 13:20:05.474562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.945 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.945 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:53.945 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:53.945 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:53.945 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:54.509 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:54.509 13:20:06 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.081 nvme0n1 00:35:55.081 13:20:07 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:55.081 13:20:07 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:55.081 Running I/O for 2 seconds... 
00:35:56.983 00:35:56.983 Latency(us) 00:35:56.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.983 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:56.983 nvme0n1 : 2.01 17594.16 68.73 0.00 0.00 7264.78 3485.32 26929.34 00:35:56.983 =================================================================================================================== 00:35:56.983 Total : 17594.16 68.73 0.00 0.00 7264.78 3485.32 26929.34 00:35:56.983 0 00:35:56.983 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:56.983 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:56.983 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:56.983 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:56.983 | select(.opcode=="crc32c") 00:35:56.983 | "\(.module_name) \(.executed)"' 00:35:56.983 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124394 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124394 ']' 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124394 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:57.241 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124394 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:57.499 killing process with pid 124394 00:35:57.499 Received shutdown signal, test time was about 2.000000 seconds 00:35:57.499 00:35:57.499 Latency(us) 00:35:57.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.499 =================================================================================================================== 00:35:57.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124394' 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124394 00:35:57.499 
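Each run_bperf iteration above follows the same initiator-side pattern; shown here with the randread 4 KiB / queue depth 128 parameters of the first run (paths and arguments copied from the trace; backgrounding of bdevperf and waiting for its RPC socket are handled by the test script rather than spelled out):

    # One digest_clean iteration on the initiator side.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # 1. Start bdevperf paused (--wait-for-rpc) so digest options can be set over RPC.
    $bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the script waits for $sock to start listening before issuing RPCs)

    # 2. Complete framework init, then attach the target namespace with data digest enabled.
    $rpc -s $sock framework_start_init
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Run the timed workload against the attached bdev (nvme0n1 in the trace).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

Because scan_dsa is false in all of these runs, the crc32c work behind --ddgst is expected to be executed by the software accel module, which is what the statistics check after each result verifies.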
13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124394 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:57.499 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124485 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124485 /var/tmp/bperf.sock 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124485 ']' 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:57.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:57.500 13:20:09 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:57.500 [2024-07-15 13:20:09.945204] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:35:57.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:57.500 Zero copy mechanism will not be used. 
00:35:57.500 [2024-07-15 13:20:09.945307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124485 ] 00:35:57.757 [2024-07-15 13:20:10.082518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.757 [2024-07-15 13:20:10.142515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.757 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:57.757 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:57.757 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:57.757 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:57.757 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:58.322 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.322 13:20:10 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.580 nvme0n1 00:35:58.580 13:20:11 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:58.580 13:20:11 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:58.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:58.838 Zero copy mechanism will not be used. 00:35:58.838 Running I/O for 2 seconds... 
00:36:00.735 00:36:00.735 Latency(us) 00:36:00.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.735 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:00.735 nvme0n1 : 2.00 7259.22 907.40 0.00 0.00 2200.07 696.32 6434.44 00:36:00.735 =================================================================================================================== 00:36:00.735 Total : 7259.22 907.40 0.00 0.00 2200.07 696.32 6434.44 00:36:00.735 0 00:36:00.735 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:00.735 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:00.735 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:00.735 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:00.735 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:00.735 | select(.opcode=="crc32c") 00:36:00.735 | "\(.module_name) \(.executed)"' 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124485 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124485 ']' 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124485 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124485 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:01.299 killing process with pid 124485 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124485' 00:36:01.299 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124485 00:36:01.299 Received shutdown signal, test time was about 2.000000 seconds 00:36:01.299 00:36:01.299 Latency(us) 00:36:01.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.299 =================================================================================================================== 00:36:01.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:01.299 
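The pass criterion for each iteration is taken from the accel framework rather than from the bdevperf numbers: get_accel_stats queries the bperf application and the test requires that crc32c was executed a non-zero number of times by the expected module. Condensed from the digest.sh@36/@37 lines in the trace:

    # Reduce accel_get_stats output to "<module_name> <executed>" for the crc32c opcode.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # Expected output here is "software <non-zero count>", matching exp_module=software
    # and the (( acc_executed > 0 )) check in host/digest.sh.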
13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124485 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124562 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124562 /var/tmp/bperf.sock 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124562 ']' 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:01.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:01.557 13:20:13 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:01.557 [2024-07-15 13:20:13.829686] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:36:01.557 [2024-07-15 13:20:13.829790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124562 ] 00:36:01.557 [2024-07-15 13:20:13.965460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.557 [2024-07-15 13:20:14.025723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.493 13:20:14 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:02.493 13:20:14 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:02.493 13:20:14 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:02.493 13:20:14 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:02.493 13:20:14 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:02.750 13:20:15 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:02.750 13:20:15 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.008 nvme0n1 00:36:03.008 13:20:15 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:03.008 13:20:15 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:03.008 Running I/O for 2 seconds... 
00:36:05.530 00:36:05.530 Latency(us) 00:36:05.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:05.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:05.530 nvme0n1 : 2.01 21163.66 82.67 0.00 0.00 6041.53 2606.55 18945.86 00:36:05.530 =================================================================================================================== 00:36:05.530 Total : 21163.66 82.67 0.00 0.00 6041.53 2606.55 18945.86 00:36:05.530 0 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:05.530 | select(.opcode=="crc32c") 00:36:05.530 | "\(.module_name) \(.executed)"' 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124562 00:36:05.530 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124562 ']' 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124562 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124562 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:05.531 killing process with pid 124562 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124562' 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124562 00:36:05.531 Received shutdown signal, test time was about 2.000000 seconds 00:36:05.531 00:36:05.531 Latency(us) 00:36:05.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:05.531 =================================================================================================================== 00:36:05.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:05.531 
13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124562 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124646 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124646 /var/tmp/bperf.sock 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124646 ']' 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:05.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:05.531 13:20:17 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:05.788 [2024-07-15 13:20:18.049566] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:36:05.788 [2024-07-15 13:20:18.049694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124646 ] 00:36:05.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:05.788 Zero copy mechanism will not be used. 
00:36:05.788 [2024-07-15 13:20:18.191218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.046 [2024-07-15 13:20:18.272138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.046 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:06.046 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:06.046 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:06.046 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:06.046 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:06.611 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.611 13:20:18 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.868 nvme0n1 00:36:06.868 13:20:19 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:06.868 13:20:19 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:06.868 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:06.868 Zero copy mechanism will not be used. 00:36:06.868 Running I/O for 2 seconds... 
00:36:08.765 00:36:08.765 Latency(us) 00:36:08.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.765 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:08.765 nvme0n1 : 2.00 6520.30 815.04 0.00 0.00 2447.67 1437.32 3872.58 00:36:08.765 =================================================================================================================== 00:36:08.765 Total : 6520.30 815.04 0.00 0.00 2447.67 1437.32 3872.58 00:36:08.765 0 00:36:08.765 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:08.765 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:08.765 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:08.765 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:08.765 | select(.opcode=="crc32c") 00:36:08.765 | "\(.module_name) \(.executed)"' 00:36:08.765 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124646 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124646 ']' 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124646 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:09.330 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124646 00:36:09.331 killing process with pid 124646 00:36:09.331 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.331 00:36:09.331 Latency(us) 00:36:09.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.331 =================================================================================================================== 00:36:09.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124646' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124646 00:36:09.331 
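Taken together, nvmf_digest_clean cycles this same sequence through four workload shapes, all with DSA disabled, in the order they appear above:

    # host/digest.sh@128..@131: run_bperf <rw> <io size> <queue depth> <scan_dsa>
    run_bperf randread  4096   128 false   # ~17.6k IOPS in this run
    run_bperf randread  131072  16 false   # ~7.3k IOPS; 128 KiB reads, zero-copy disabled (above the 64 KiB threshold)
    run_bperf randwrite 4096   128 false   # ~21.2k IOPS in this run
    run_bperf randwrite 131072  16 false   # ~6.5k IOPS; 128 KiB writes, zero-copy disabled as well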
13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124646 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 124348 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124348 ']' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124348 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124348 00:36:09.331 killing process with pid 124348 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124348' 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124348 00:36:09.331 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124348 00:36:09.589 ************************************ 00:36:09.589 END TEST nvmf_digest_clean 00:36:09.589 ************************************ 00:36:09.589 00:36:09.589 real 0m17.973s 00:36:09.589 user 0m32.519s 00:36:09.589 sys 0m6.079s 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 ************************************ 00:36:09.589 START TEST nvmf_digest_error 00:36:09.589 ************************************ 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@485 
-- # nvmfpid=124742 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode --wait-for-rpc 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@486 -- # waitforlisten 124742 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 124742 ']' 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:09.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:09.589 13:20:21 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 [2024-07-15 13:20:22.052239] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:09.589 [2024-07-15 13:20:22.053598] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:36:09.589 [2024-07-15 13:20:22.053684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.847 [2024-07-15 13:20:22.195189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.847 [2024-07-15 13:20:22.267532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.847 [2024-07-15 13:20:22.267595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.847 [2024-07-15 13:20:22.267609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.847 [2024-07-15 13:20:22.267619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.847 [2024-07-15 13:20:22.267628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.847 [2024-07-15 13:20:22.267686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.847 [2024-07-15 13:20:22.268128] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:09.847 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:09.847 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:09.847 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:36:09.847 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:09.847 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.103 [2024-07-15 13:20:22.352295] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.103 [2024-07-15 13:20:22.415641] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
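The error variant differs from the clean tests mainly in this target-side step: while the target is still paused by --wait-for-rpc, crc32c is reassigned from the software module to the accel error-injection module, so digest calculations can later be corrupted on demand. rpc_cmd here is the test suite's wrapper around scripts/rpc.py aimed at the nvmf target:

    # host/digest.sh@104: route the target's crc32c operations through the "error" module.
    rpc_cmd accel_assign_opc -o crc32c -m error
    # common_target_config then finishes initialization (framework start, the null0 bdev,
    # the subsystem and its TCP listener on 10.0.0.2:4420), which the trace shows only as notices.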
00:36:10.103 null0 00:36:10.103 [2024-07-15 13:20:22.428431] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.103 [2024-07-15 13:20:22.452653] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=124771 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 124771 /var/tmp/bperf.sock 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 124771 ']' 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:10.103 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.104 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:10.104 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.104 [2024-07-15 13:20:22.508071] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:36:10.104 [2024-07-15 13:20:22.508160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124771 ]
00:36:10.361 [2024-07-15 13:20:22.645787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:10.361 [2024-07-15 13:20:22.716079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:10.617 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:10.617 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:10.618 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:10.618 13:20:22 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:10.875 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:11.441 nvme0n1
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:11.441 13:20:23 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:11.441 Running I/O for 2 seconds...
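Editor's note: the setup above reduces to a short RPC sequence. A minimal sketch assembled from the commands in this run follows (rpc.py is scripts/rpc.py and paths are shortened to be repo-relative; the bperf socket, 10.0.0.2:4420 target, and NQN are the ones used here):

  # Target side (default RPC socket): route crc32c to the error-injection accel module.
  rpc.py accel_assign_opc -o crc32c -m error

  # Host side: start bdevperf waiting for RPCs, then configure it over its own socket
  # with error statistics enabled and data digest (--ddgst) on the TCP controller.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt crc32c results so the host sees data digest errors on reads.
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive the 2-second randread workload, then read back the transient transport error count.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'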
00:36:11.441 [2024-07-15 13:20:23.849134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.441 [2024-07-15 13:20:23.849209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.441 [2024-07-15 13:20:23.849226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.441 [2024-07-15 13:20:23.869871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.441 [2024-07-15 13:20:23.869944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.441 [2024-07-15 13:20:23.869960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.441 [2024-07-15 13:20:23.885341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.441 [2024-07-15 13:20:23.885418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.441 [2024-07-15 13:20:23.885434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.441 [2024-07-15 13:20:23.903011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.441 [2024-07-15 13:20:23.903087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.441 [2024-07-15 13:20:23.903105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:23.922463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:23.922564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:23.922584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:23.936151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:23.936240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:23.936265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:23.954703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:23.954789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:23.954807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:23.968314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:23.968385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:23.968402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:23.986729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:23.986815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:23.986833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.002585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.002662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.002678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.018452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.018528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.018544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.037030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.037104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.037119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.051793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.051864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.051880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.067294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.067375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.067391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.082873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.082943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.082960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.095061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.095139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.095154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.108222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.108297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.108321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.124869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.124941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.124957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.699 [2024-07-15 13:20:24.139774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.699 [2024-07-15 13:20:24.139844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.699 [2024-07-15 13:20:24.139861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.700 [2024-07-15 13:20:24.153916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.700 [2024-07-15 13:20:24.154009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.700 [2024-07-15 13:20:24.154036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.172215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.172296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.958 [2024-07-15 13:20:24.172313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.188054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.188131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.188147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.207004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.207082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.207098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.226633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.226710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.226726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.246175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.246249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.246267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.261315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.261387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.261403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.281991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.282079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.282096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.303057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.303145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:15446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.303167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.319431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.319508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.319524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.339020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.339101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.339119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.353367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.353462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.371747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.371831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.371848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.388736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.388822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.388839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.405847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.405921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.405939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.958 [2024-07-15 13:20:24.424379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:11.958 [2024-07-15 13:20:24.424486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.958 [2024-07-15 13:20:24.424511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.437342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.437421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.437437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.455049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.455132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.455149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.467849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.467916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.467931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.483123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.483209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.483227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.498987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.499060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.499077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.514818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.514895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.514912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.528058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 
00:36:12.217 [2024-07-15 13:20:24.528134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.528150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.544173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.544247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.544263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.559532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.559608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.559625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.573182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.573261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.573278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.585485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.585567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.585583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.603272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.217 [2024-07-15 13:20:24.603352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.217 [2024-07-15 13:20:24.603369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.217 [2024-07-15 13:20:24.618144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.218 [2024-07-15 13:20:24.618218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.218 [2024-07-15 13:20:24.618234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.218 [2024-07-15 13:20:24.632093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.218 [2024-07-15 13:20:24.632167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.218 [2024-07-15 13:20:24.632182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.218 [2024-07-15 13:20:24.647474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.218 [2024-07-15 13:20:24.647551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.218 [2024-07-15 13:20:24.647568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.218 [2024-07-15 13:20:24.663276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.218 [2024-07-15 13:20:24.663350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.218 [2024-07-15 13:20:24.663366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.218 [2024-07-15 13:20:24.678195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.218 [2024-07-15 13:20:24.678272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.218 [2024-07-15 13:20:24.678291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.475 [2024-07-15 13:20:24.692626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.475 [2024-07-15 13:20:24.692723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.475 [2024-07-15 13:20:24.692744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.475 [2024-07-15 13:20:24.708856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.475 [2024-07-15 13:20:24.708946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.475 [2024-07-15 13:20:24.708969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.475 [2024-07-15 13:20:24.725228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.475 [2024-07-15 13:20:24.725344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.475 [2024-07-15 13:20:24.725371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.739979] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.740056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.740072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.755171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.755268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.755289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.771485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.771565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.771581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.786387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.786447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.786462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.803379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.803469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.803490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.818573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.818646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.818662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.830834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.830936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.830963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.847435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.847530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.847558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.862885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.862957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.862973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.879032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.879107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.879123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.894129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.894214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.894231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.909291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.909391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.909412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.926603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.926708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.926729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.476 [2024-07-15 13:20:24.940307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.476 [2024-07-15 13:20:24.940417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.476 [2024-07-15 13:20:24.940440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:24.957863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:24.957996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:24.958022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:24.973997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:24.974106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:24.974131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:24.991098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:24.991178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:24.991195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.005135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.005222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.005241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.021120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.021197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.021213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.034663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.034739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.034755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.051271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.051349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.051364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.065482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.065585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.065603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.081302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.081377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.081393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.094107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.094183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.094198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.110760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.110863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.123708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.123795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.123814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.139067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.139141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.139157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.154722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.154818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:12.763 [2024-07-15 13:20:25.154836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.170311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.170389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.170406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.184860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.184959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.202153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.202259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.202286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:12.763 [2024-07-15 13:20:25.216648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:12.763 [2024-07-15 13:20:25.216755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.763 [2024-07-15 13:20:25.216791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.234255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.234335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.234352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.252007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.252080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.252098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.269758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.269829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:24687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.269845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.286566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.286653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.307100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.307202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.307231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.322981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.323080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.323106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.342710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.342822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.342850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.359160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.359262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.359283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.371328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.371404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.371422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.391678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.391745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.410401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.410478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.410495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.426136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.426208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.426223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.445263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.445369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.445395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.461832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.461907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.461922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.480996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.481080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.481101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.041 [2024-07-15 13:20:25.499856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.041 [2024-07-15 13:20:25.499934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.041 [2024-07-15 13:20:25.499950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.513268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 
00:36:13.299 [2024-07-15 13:20:25.513346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.513362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.533026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.533137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.533166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.551473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.551583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.551609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.569199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.569311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.569338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.587173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.587269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.587295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.605343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.605460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.605486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.622346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.622453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.640922] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.640998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.641015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.656225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.656292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.656308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.677081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.677154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.677170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.692461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.692534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.692550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.299 [2024-07-15 13:20:25.711063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.299 [2024-07-15 13:20:25.711157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.299 [2024-07-15 13:20:25.711175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.300 [2024-07-15 13:20:25.727447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.300 [2024-07-15 13:20:25.727531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.300 [2024-07-15 13:20:25.727553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.300 [2024-07-15 13:20:25.750507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.300 [2024-07-15 13:20:25.750582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.300 [2024-07-15 13:20:25.750598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:36:13.558 [2024-07-15 13:20:25.770529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.558 [2024-07-15 13:20:25.770601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.558 [2024-07-15 13:20:25.770617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.558 [2024-07-15 13:20:25.786569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.558 [2024-07-15 13:20:25.786633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.558 [2024-07-15 13:20:25.786649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.558 [2024-07-15 13:20:25.806882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.558 [2024-07-15 13:20:25.806950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.558 [2024-07-15 13:20:25.806966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.558 [2024-07-15 13:20:25.823053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17fc3e0) 00:36:13.558 [2024-07-15 13:20:25.823114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.558 [2024-07-15 13:20:25.823130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:13.558 00:36:13.558 Latency(us) 00:36:13.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:13.558 nvme0n1 : 2.00 15470.23 60.43 0.00 0.00 8263.42 4051.32 23712.12 00:36:13.558 =================================================================================================================== 00:36:13.558 Total : 15470.23 60.43 0.00 0.00 8263.42 4051.32 23712.12 00:36:13.558 0 00:36:13.558 13:20:25 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:13.558 13:20:25 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:13.558 13:20:25 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:13.558 13:20:25 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:13.558 | .driver_specific 00:36:13.558 | .nvme_error 00:36:13.558 | .status_code 00:36:13.558 | .command_transient_transport_error' 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 124771 00:36:13.816 
13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 124771 ']' 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 124771 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124771 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:13.816 killing process with pid 124771 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124771' 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 124771 00:36:13.816 Received shutdown signal, test time was about 2.000000 seconds 00:36:13.816 00:36:13.816 Latency(us) 00:36:13.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.816 =================================================================================================================== 00:36:13.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:13.816 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 124771 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=124845 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 124845 /var/tmp/bperf.sock 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 124845 ']' 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:14.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:14.074 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:14.075 [2024-07-15 13:20:26.424456] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:36:14.075 [2024-07-15 13:20:26.424554] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:36:14.075 Zero copy mechanism will not be used. 00:36:14.075 llocations --file-prefix=spdk_pid124845 ] 00:36:14.332 [2024-07-15 13:20:26.560522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.332 [2024-07-15 13:20:26.661709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.332 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:14.332 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:14.332 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:14.332 13:20:26 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:14.590 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:15.156 nvme0n1 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:15.156 13:20:27 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
-s /var/tmp/bperf.sock perform_tests 00:36:15.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:15.156 Zero copy mechanism will not be used. 00:36:15.156 Running I/O for 2 seconds... 00:36:15.156 [2024-07-15 13:20:27.558546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.558649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.558678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.566803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.566901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.566927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.575111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.575224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.575256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.583647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.583792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.583825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.592190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.592297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.592323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.601119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.601225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.601253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.609675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.609794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.609824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.617032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.617107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.617127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.156 [2024-07-15 13:20:27.623889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.156 [2024-07-15 13:20:27.624011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.156 [2024-07-15 13:20:27.624045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.633566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.633646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.633666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.639785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.639886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.639918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.647603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.647709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.647743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.656612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.656737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.656793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.662204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.662273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.662291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.668701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.668793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.668813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.674712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.674810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.674829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.683915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.684035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.684066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.693187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.693293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.693324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.701438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.701548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.701579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.710067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.710160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.710191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.718860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 
[2024-07-15 13:20:27.718931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.718957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.725543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.725603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.725621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.732055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.732113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.732130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.735649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.735713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.735731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.742270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.742364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.742396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.747508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.747559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.747574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.752966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.753019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.753034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.757295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.757344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.757360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.760465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.760510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.760525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.765847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.765893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.765908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.771152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.771199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.771214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.774709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.774753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.774784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.779419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.779464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.779478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.785240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.785291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.785306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.789564] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.789618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.794802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.794853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.414 [2024-07-15 13:20:27.794868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.414 [2024-07-15 13:20:27.799060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.414 [2024-07-15 13:20:27.799111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.799126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.803827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.803879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.803894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.808919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.808984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.812834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.812880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.812894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.817111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.817162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.817178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:36:15.415 [2024-07-15 13:20:27.822882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.822939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.822954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.826796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.826843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.826858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.831465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.831514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.831528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.836787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.836836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.836851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.840671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.840719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.840734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.845963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.846014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.846030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.852175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.852241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.852266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.857248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.857312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.857337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.863360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.863424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.863440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.868857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.868933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.868949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.874496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.874608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.874634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.415 [2024-07-15 13:20:27.880531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.415 [2024-07-15 13:20:27.880637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.415 [2024-07-15 13:20:27.880664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.886877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.886996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.893412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.893513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.893540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.900468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.900557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.900583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.907179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.907259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.907294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.914641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.914727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.914754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.921920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.921993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.922018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.928505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.928582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.928603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.934524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.934611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.934636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.941276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.941352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.941373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.947931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.948036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.948064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.954044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.954105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.958086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.958137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.958153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.962483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.962549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.962565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.968386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.968449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.968465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.974006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.974089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.673 [2024-07-15 13:20:27.974116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.673 [2024-07-15 13:20:27.977813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.673 [2024-07-15 13:20:27.977882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 
[2024-07-15 13:20:27.977909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:27.983649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:27.983748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:27.983791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:27.989908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:27.989988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:27.990016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:27.994883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:27.994956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:27.994981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:27.998437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:27.998503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:27.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.003536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.003617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.003644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.008584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.008672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.008700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.013885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.013970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.013997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.018244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.018318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.018343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.022832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.022906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.027315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.027392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.027418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.032255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.032331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.032358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.037556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.037642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.037669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.043204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.043280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.043308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.047245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.047309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.047333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.052130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.052202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.052226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.057464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.057563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.057580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.062192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.062256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.062272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.067291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.067352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.067367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.071392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.071448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.071464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.076336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.076405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.076420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.080870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.080936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.080952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.085292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.085362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.085377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.090041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.090105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.090120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.094221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.094307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.094324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.099377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.099453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.099469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.105036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.105123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.105139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.674 [2024-07-15 13:20:28.108565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.674 [2024-07-15 13:20:28.108644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.674 [2024-07-15 13:20:28.108665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.675 [2024-07-15 13:20:28.113640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.675 
[2024-07-15 13:20:28.113726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.675 [2024-07-15 13:20:28.113745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.675 [2024-07-15 13:20:28.120049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.675 [2024-07-15 13:20:28.120151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.675 [2024-07-15 13:20:28.120172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.675 [2024-07-15 13:20:28.125931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.675 [2024-07-15 13:20:28.126033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.675 [2024-07-15 13:20:28.126054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.675 [2024-07-15 13:20:28.130051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.675 [2024-07-15 13:20:28.130126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.675 [2024-07-15 13:20:28.130143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.675 [2024-07-15 13:20:28.135902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.675 [2024-07-15 13:20:28.135987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.675 [2024-07-15 13:20:28.136003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.141538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.141635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.141656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.147194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.147290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.147308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.152431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.152554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.152572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.159060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.159148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.159165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.164621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.164700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.164716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.170029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.170130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.170151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.176339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.176442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.180826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.180895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.180912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.185743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.185833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.185848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.192120] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.192236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.192258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.198234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.198319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.198335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.203877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.203958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.203974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.207354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.207430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.207446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.213781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.213853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.213869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.219372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.219476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.219497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.223542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.223618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.223634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:36:15.933 [2024-07-15 13:20:28.228254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.228333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.933 [2024-07-15 13:20:28.233659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.933 [2024-07-15 13:20:28.233733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.933 [2024-07-15 13:20:28.233749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.238509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.238574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.238590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.243526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.243602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.243621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.247690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.247757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.247790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.253475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.253545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.253561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.259971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.260048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.260065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.264868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.264959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.264980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.270268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.270341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.270357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.274978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.275066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.275086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.280788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.280863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.280879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.285983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.286060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.286077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.290218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.290291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.290307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.296192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.296281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.296301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.302831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.302908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.302924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.307550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.307621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.313539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.313627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.320837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.320935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.320962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.328936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.329038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.329064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.336641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.336742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.344395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.344520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.352583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.352685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.352713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.360207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.360320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.360348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.368025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.368121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.368147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.375236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.375327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.375367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.379905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.379985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.380018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.388006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.388103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 [2024-07-15 13:20:28.388130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:15.934 [2024-07-15 13:20:28.395555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:15.934 [2024-07-15 13:20:28.395649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.934 
[2024-07-15 13:20:28.395692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.403460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.403554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.403581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.410629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.410719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.410741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.417274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.417359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.417384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.422898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.422977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.423003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.428405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.428489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.428512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.434748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.434851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.434876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.441242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.441328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.441353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.447799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.447887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.447916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.452856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.452928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.452954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.460086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.460186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.460215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.464998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.465073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.465099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.472445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.472551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.472579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.481643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.481781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.481808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.487355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.487480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.487510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.494791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.494898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.494926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.504473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.504580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.504607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.512487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.512608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.512636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.519131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.519231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.519259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.525977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.526080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.526103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.533997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.534113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.534141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.541653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.541784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.549908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.550016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.550047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.557578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.557681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.557718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.563263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.563363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.563391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.571083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.571191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.571220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.578279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.578375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.578404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.585497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.585606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.585635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.592387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 
[2024-07-15 13:20:28.592505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.592534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.599907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.600028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.600058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.608026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.608109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.608139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.614748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.614858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.614883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.621483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.621609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.621638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.628980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.629078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.629106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.634895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.634985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.635012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.642876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.642982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.643010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.650346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.650446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.650475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.191 [2024-07-15 13:20:28.656384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.191 [2024-07-15 13:20:28.656469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.191 [2024-07-15 13:20:28.656499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.449 [2024-07-15 13:20:28.664302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.449 [2024-07-15 13:20:28.664404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.449 [2024-07-15 13:20:28.664434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.449 [2024-07-15 13:20:28.673294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.673405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.673434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.678303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.678399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.678428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.685883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.685986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.686016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.693851] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.693959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.693988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.700656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.700758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.700818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.707385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.707478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.707508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.715478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.715588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.715616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.720526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.720622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.720650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.727081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.727209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.733650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.733747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.733798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:36:16.450 [2024-07-15 13:20:28.739504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.739605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.739629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.745674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.745749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.745782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.751265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.751347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.751364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.756498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.756583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.756601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.762448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.762517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.768975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.769044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.769060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.774869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.774932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.774947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.779199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.779266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.779296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.785296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.785383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.785403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.791715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.791819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.791852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.796187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.796250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.796267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.802304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.802377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.450 [2024-07-15 13:20:28.802393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.450 [2024-07-15 13:20:28.807573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.450 [2024-07-15 13:20:28.807642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.807676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.813205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.813272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.813288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.819735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.819813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.819830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.824579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.824643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.824660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.829811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.829875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.829891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.835898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.835963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.835978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.841122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.841197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.841213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.845924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.846008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.846024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.851990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.852070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.852086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.856743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.856818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.856835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.862130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.862192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.862208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.868112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.868184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.868209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.875301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.875395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.875423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.880355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.880431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.880456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.886658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.886755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.892625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.892712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 
[2024-07-15 13:20:28.892736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.898039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.898106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.898122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.902107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.902185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.902209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.907385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.907451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.451 [2024-07-15 13:20:28.907466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.451 [2024-07-15 13:20:28.913512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.451 [2024-07-15 13:20:28.913576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.452 [2024-07-15 13:20:28.913592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.918570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.918631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.918646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.924683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.924747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.924779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.929645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.929706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.929722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.934828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.934914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.934930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.940338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.940433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.940450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.947037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.947117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.947133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.953623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.953710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.959640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.959744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.959759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.963326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.963387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.963403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.968102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.968164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.968178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.973454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.973526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.973542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.979542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.979607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.979622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.983646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.983727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.983744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.988111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.988173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.988189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.992952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.993031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.993046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:28.996674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:28.996734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:28.996749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:29.002114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:29.002201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:29.002231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:29.006893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:29.006967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:29.006984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:29.011909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:29.011974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:29.011990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:29.017663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.711 [2024-07-15 13:20:29.017737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.711 [2024-07-15 13:20:29.017754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.711 [2024-07-15 13:20:29.022121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.022206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.026069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.026128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.026143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.031041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.031102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.031118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.035213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 
00:36:16.712 [2024-07-15 13:20:29.035270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.039887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.039941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.039957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.043877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.043951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.043981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.049256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.049332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.055079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.055154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.055181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.061061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.061123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.061139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.064502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.064550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.064564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.069517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.069587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.069607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.074265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.074320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.074335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.078811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.078860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.078875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.084311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.084395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.084418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.091447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.091532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.091557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.098253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.098347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.098372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.104864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.104960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.104986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.110549] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.110625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.110648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.115632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.115714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.115730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.120415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.120480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.120495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.125745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.125823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.125839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.130748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.130823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.130838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.135697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.135786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.135813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.140599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.140688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
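The repeated triplets above are the NVMe/TCP data digest (DDGST) check failing on the initiator side: nvme_tcp recomputes a CRC-32C over each received data PDU, logs "data digest error" when it does not match the digest carried in the PDU, and completes the affected READ with the generic "Transient Transport Error" status (the (00/22) printed by spdk_nvme_print_completion is status code type 00h, status code 22h). As a rough sketch only, not SPDK source, with a made-up payload and bit-flip standing in for whatever actually corrupted the data or digest in this run, the check amounts to a CRC-32C comparison like this:

/* crc32c_ddgst_sketch.c - illustrative only; build with: cc crc32c_ddgst_sketch.c */
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78, the CRC used
 * for the NVMe/TCP header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t pdu_data[32] = "payload of one C2H data PDU";       /* hypothetical data */
    uint32_t ddgst_in_pdu = crc32c(pdu_data, sizeof(pdu_data)); /* digest as sent */

    pdu_data[0] ^= 0x01; /* stand-in for whatever corrupted the data or digest above */

    if (crc32c(pdu_data, sizeof(pdu_data)) != ddgst_in_pdu)
        printf("data digest error: complete command with Transient Transport Error (00/22)\n");
    return 0;
}

Consistent with that behavior, the same tqpair pointer (0xb04380) appears throughout and every completion carries dnr:0, so only the individual commands are failed as retryable; the queue pair itself is not torn down between errors.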
00:36:16.712 [2024-07-15 13:20:29.144494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.144559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.144575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.149938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.150011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.150027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.154618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.154679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.154694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.160256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.160316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.160332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.165499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.712 [2024-07-15 13:20:29.165569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.712 [2024-07-15 13:20:29.165585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.712 [2024-07-15 13:20:29.169847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.713 [2024-07-15 13:20:29.169898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.713 [2024-07-15 13:20:29.169912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.713 [2024-07-15 13:20:29.173346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.713 [2024-07-15 13:20:29.173420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.713 [2024-07-15 13:20:29.173445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.179925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.179992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.180008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.184858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.184912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.184927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.188676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.188733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.188749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.194125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.194196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.194212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.199751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.199828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.199844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.203826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.203885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.203901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.208730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.208808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.208824] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.214116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.214187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.214202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.221426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.221507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.221524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.227362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.227437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.227453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.231234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.231293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.236942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.237004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.237020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.242090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.242173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.242194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.246038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.246089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.246104] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.251515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.251577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.251592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.257588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.257659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.972 [2024-07-15 13:20:29.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.972 [2024-07-15 13:20:29.262980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.972 [2024-07-15 13:20:29.263048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.263064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.266791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.266848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.266863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.271150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.271205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.271219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.277072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.277153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.277179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.282132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.282189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:16.973 [2024-07-15 13:20:29.282205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.285689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.285777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.291001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.291092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.291119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.296584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.296649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.296665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.303005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.303071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.303086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.306797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.306852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.306866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.312228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.312291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.312306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.317953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.318024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.318041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.321739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.321822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.321843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.327265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.327347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.327369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.332369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.332454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.332475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.337041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.337105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.337129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.342134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.342203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.342218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.347150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.347208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.347224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.351195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.351263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.351280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.357288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.357382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.357407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.363130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.363220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.363241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.368560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.368628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.368644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.372229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.372280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.372295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.377615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.377696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.377717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.383138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.383217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.386511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.386575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.386605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.391649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.391738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.396310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.396377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.396400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.973 [2024-07-15 13:20:29.401203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.973 [2024-07-15 13:20:29.401277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.973 [2024-07-15 13:20:29.401302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.406697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.406788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.406813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.412267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.412342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.412372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.417169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.417263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.417292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.422641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 
[2024-07-15 13:20:29.422715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.422738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.426833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.426894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.426917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.431620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.431692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.431715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:16.974 [2024-07-15 13:20:29.436500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:16.974 [2024-07-15 13:20:29.436590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:16.974 [2024-07-15 13:20:29.436617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.441184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.441246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.441262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.445997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.446057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.446072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.451569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.451640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.451671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.455990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.456049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.461557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.461623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.461639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.466974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.467042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.467058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.472350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.472423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.472440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.478230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.478301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.478317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.483683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.483757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.483790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.487822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.487883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.487899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.493675] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.250 [2024-07-15 13:20:29.493779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.250 [2024-07-15 13:20:29.499490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.250 [2024-07-15 13:20:29.499560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.499576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.503568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.503641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.503676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.508941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.509024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.515702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.515823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.515846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.519582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.519646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.519685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.524804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.524873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.524888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:36:17.251 [2024-07-15 13:20:29.530357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.530429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.530445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.536432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.536502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.536517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.540956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.541022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.541039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.251 [2024-07-15 13:20:29.547027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb04380) 00:36:17.251 [2024-07-15 13:20:29.547100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.251 [2024-07-15 13:20:29.547116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.251 00:36:17.251 Latency(us) 00:36:17.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:17.251 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:17.251 nvme0n1 : 2.00 5421.15 677.64 0.00 0.00 2946.44 640.47 9830.40 00:36:17.251 =================================================================================================================== 00:36:17.251 Total : 5421.15 677.64 0.00 0.00 2946.44 640.47 9830.40 00:36:17.251 0 00:36:17.251 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:17.251 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:17.251 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:17.251 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:17.251 | .driver_specific 00:36:17.251 | .nvme_error 00:36:17.251 | .status_code 00:36:17.251 | .command_transient_transport_error' 00:36:17.508 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 )) 00:36:17.509 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 124845 00:36:17.509 13:20:29 
nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 124845 ']' 00:36:17.509 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 124845 00:36:17.509 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:17.509 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:17.509 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124845 00:36:17.767 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:17.767 killing process with pid 124845 00:36:17.767 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:17.767 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124845' 00:36:17.767 Received shutdown signal, test time was about 2.000000 seconds 00:36:17.767 00:36:17.767 Latency(us) 00:36:17.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:17.767 =================================================================================================================== 00:36:17.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:17.767 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 124845 00:36:17.767 13:20:29 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 124845 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=124922 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 124922 /var/tmp/bperf.sock 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 124922 ']' 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:17.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
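The get_transient_errcount step traced just above pulls the per-error-code NVMe statistics out of bdevperf over its RPC socket and checks that the injected digest corruption really surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal sketch of that check, assuming the same socket path, bdev name, and JSON layout that the script's own jq filter walks (rpc.py, bdev_get_iostat, and the jq path are taken from the trace; nothing else is added):

#!/usr/bin/env bash
# Count the transient transport errors bdevperf recorded for nvme0n1,
# mirroring the host/digest.sh trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The subtest only passes if error injection produced such completions
# (the run above reported 349 of them).
(( errcount > 0 )) && echo "observed $errcount transient transport errors"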
00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:17.767 13:20:30 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:18.024 [2024-07-15 13:20:30.283167] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:36:18.024 [2024-07-15 13:20:30.284045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124922 ] 00:36:18.024 [2024-07-15 13:20:30.453239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.283 [2024-07-15 13:20:30.540267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:19.216 13:20:31 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:19.782 nvme0n1 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:19.782 13:20:32 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:19.782 Running I/O for 2 seconds... 
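Before the randwrite pass starts issuing I/O, the trace above wires up the error-injection path end to end: bdevperf is told to keep per-status-code NVMe error statistics and to retry failed I/O indefinitely, crc32c error injection is first disabled, the controller is attached with data digest (--ddgst) enabled, and then crc32c corruption is injected (-t corrupt -i 256) so digest validation fails intermittently during the run. A minimal sketch of that RPC sequence, with every command taken from the trace; the target-side socket path is an assumption (rpc_cmd in this harness addresses the NVMe-oF target's default RPC socket):

#!/usr/bin/env bash
# Sketch of the randwrite digest-error setup traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock      # bdevperf application socket (from the trace)
target_sock=/var/tmp/spdk.sock      # assumption: NVMe-oF target's default RPC socket

# Host side: keep per-error-code NVMe stats and retry failed I/O until the test ends.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with crc32c error injection disabled, attach with data digest enabled,
# then inject crc32c corruption on the target (parameters as traced).
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the timed workload on the already-running bdevperf instance.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests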
00:36:19.782 [2024-07-15 13:20:32.227080] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190df988 00:36:19.782 [2024-07-15 13:20:32.228089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:19.782 [2024-07-15 13:20:32.228137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:19.782 [2024-07-15 13:20:32.242899] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e1b48 00:36:19.782 [2024-07-15 13:20:32.243920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:19.782 [2024-07-15 13:20:32.243975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.256286] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ea248 00:36:20.038 [2024-07-15 13:20:32.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.038 [2024-07-15 13:20:32.257512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.271922] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f8618 00:36:20.038 [2024-07-15 13:20:32.273753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.038 [2024-07-15 13:20:32.273818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.281181] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:20.038 [2024-07-15 13:20:32.281964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.038 [2024-07-15 13:20:32.282005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.296312] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de8a8 00:36:20.038 [2024-07-15 13:20:32.297850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.038 [2024-07-15 13:20:32.297898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.308201] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3d08 00:36:20.038 [2024-07-15 13:20:32.309672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.038 [2024-07-15 13:20:32.309720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:36:20.038 [2024-07-15 13:20:32.321298] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ec408 00:36:20.038 [2024-07-15 13:20:32.322576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.322636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.334431] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eaef0 00:36:20.039 [2024-07-15 13:20:32.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.335792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.350227] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eff18 00:36:20.039 [2024-07-15 13:20:32.352558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.352603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.359389] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3d08 00:36:20.039 [2024-07-15 13:20:32.360379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.360420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.374801] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190df988 00:36:20.039 [2024-07-15 13:20:32.376549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.376612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.386650] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f7da8 00:36:20.039 [2024-07-15 13:20:32.388301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.388353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.398995] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6890 00:36:20.039 [2024-07-15 13:20:32.400401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.400460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.411356] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3498 00:36:20.039 [2024-07-15 13:20:32.413469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.413526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.424985] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e9e10 00:36:20.039 [2024-07-15 13:20:32.426170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.426231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.438382] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ef270 00:36:20.039 [2024-07-15 13:20:32.439111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.439163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.450958] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eff18 00:36:20.039 [2024-07-15 13:20:32.452132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.452175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.466491] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e1b48 00:36:20.039 [2024-07-15 13:20:32.468287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.468340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.475612] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eaef0 00:36:20.039 [2024-07-15 13:20:32.476429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.476473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.490970] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ff3c8 00:36:20.039 [2024-07-15 13:20:32.492447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.492494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:20.039 [2024-07-15 13:20:32.502683] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190dfdc0 00:36:20.039 [2024-07-15 13:20:32.504176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.039 [2024-07-15 13:20:32.504230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.515312] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f20d8 00:36:20.296 [2024-07-15 13:20:32.516516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.516573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.530585] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e7c50 00:36:20.296 [2024-07-15 13:20:32.532478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.532525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.539556] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ed0b0 00:36:20.296 [2024-07-15 13:20:32.540511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.540557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.552192] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3a28 00:36:20.296 [2024-07-15 13:20:32.553090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.553143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.567603] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ef270 00:36:20.296 [2024-07-15 13:20:32.569504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.569556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.579995] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6020 00:36:20.296 [2024-07-15 13:20:32.581941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.581998] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.592586] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e9e10 00:36:20.296 [2024-07-15 13:20:32.594068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.594113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.604325] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0bc0 00:36:20.296 [2024-07-15 13:20:32.605687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.605741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.617588] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3a28 00:36:20.296 [2024-07-15 13:20:32.618742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.618797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.632837] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190dfdc0 00:36:20.296 [2024-07-15 13:20:32.634753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.634813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.642187] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ff3c8 00:36:20.296 [2024-07-15 13:20:32.643021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.643058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.657511] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e4578 00:36:20.296 [2024-07-15 13:20:32.659007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.659065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.669455] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fe720 00:36:20.296 [2024-07-15 13:20:32.670647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.670696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.681612] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f8e88 00:36:20.296 [2024-07-15 13:20:32.682727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.682801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.693267] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0bc0 00:36:20.296 [2024-07-15 13:20:32.694180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.694225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.707852] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3a28 00:36:20.296 [2024-07-15 13:20:32.709554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.296 [2024-07-15 13:20:32.709616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:20.296 [2024-07-15 13:20:32.720835] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fc998 00:36:20.297 [2024-07-15 13:20:32.722567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.297 [2024-07-15 13:20:32.722615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:20.297 [2024-07-15 13:20:32.737788] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ef6a8 00:36:20.297 [2024-07-15 13:20:32.739863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.297 [2024-07-15 13:20:32.739915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:20.297 [2024-07-15 13:20:32.747324] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f57b0 00:36:20.297 [2024-07-15 13:20:32.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.297 [2024-07-15 13:20:32.748572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:20.297 [2024-07-15 13:20:32.763271] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ea248 00:36:20.553 [2024-07-15 13:20:32.765232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 
13:20:32.765280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.774915] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3d08 00:36:20.553 [2024-07-15 13:20:32.776021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.776068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.788625] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190df118 00:36:20.553 [2024-07-15 13:20:32.790174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.790227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.800968] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6020 00:36:20.553 [2024-07-15 13:20:32.802112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.802184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.812832] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ef6a8 00:36:20.553 [2024-07-15 13:20:32.813884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.813939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.824663] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fb8b8 00:36:20.553 [2024-07-15 13:20:32.825555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.825610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.839981] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eea00 00:36:20.553 [2024-07-15 13:20:32.841540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.553 [2024-07-15 13:20:32.841604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:20.553 [2024-07-15 13:20:32.853895] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fc560 00:36:20.553 [2024-07-15 13:20:32.856012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:20.554 [2024-07-15 13:20:32.856074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.863935] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190df988 00:36:20.554 [2024-07-15 13:20:32.864899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.864955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.881004] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f9f68 00:36:20.554 [2024-07-15 13:20:32.882703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.882759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.893791] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e73e0 00:36:20.554 [2024-07-15 13:20:32.896351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.896424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.908986] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0350 00:36:20.554 [2024-07-15 13:20:32.911020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.911079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.918863] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:20.554 [2024-07-15 13:20:32.919937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.920012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.933824] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f8a50 00:36:20.554 [2024-07-15 13:20:32.934839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.934903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.950648] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e5220 00:36:20.554 [2024-07-15 13:20:32.952779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9684 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.952856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.960905] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e01f8 00:36:20.554 [2024-07-15 13:20:32.962033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.962099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.977624] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ebfd0 00:36:20.554 [2024-07-15 13:20:32.979481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.979550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:32.990485] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e5ec8 00:36:20.554 [2024-07-15 13:20:32.992255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:32.992309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:33.003477] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3060 00:36:20.554 [2024-07-15 13:20:33.005057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:33.005113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.554 [2024-07-15 13:20:33.015781] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f4298 00:36:20.554 [2024-07-15 13:20:33.017124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.554 [2024-07-15 13:20:33.017168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.030041] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3e60 00:36:20.812 [2024-07-15 13:20:33.031049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.031105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.045645] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3e60 00:36:20.812 [2024-07-15 13:20:33.046651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13302 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.046713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.061638] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3e60 00:36:20.812 [2024-07-15 13:20:33.062637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.062696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.077835] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f3e60 00:36:20.812 [2024-07-15 13:20:33.078837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.078891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.094163] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3060 00:36:20.812 [2024-07-15 13:20:33.095382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.095438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.109072] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e8d30 00:36:20.812 [2024-07-15 13:20:33.110243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.110286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.122132] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f46d0 00:36:20.812 [2024-07-15 13:20:33.123382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.123440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.138990] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3060 00:36:20.812 [2024-07-15 13:20:33.140748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.140815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.148002] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e5220 00:36:20.812 [2024-07-15 13:20:33.148735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.148785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.163721] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190dece0 00:36:20.812 [2024-07-15 13:20:33.165452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.165491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.175905] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ebfd0 00:36:20.812 [2024-07-15 13:20:33.177647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.177687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.184225] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e6300 00:36:20.812 [2024-07-15 13:20:33.185001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.185039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.196867] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de038 00:36:20.812 [2024-07-15 13:20:33.197608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.197658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.212019] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0350 00:36:20.812 [2024-07-15 13:20:33.213642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.213682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.221831] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:20.812 [2024-07-15 13:20:33.222755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.222805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.236817] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f7538 00:36:20.812 [2024-07-15 13:20:33.238404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.238446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:20.812 [2024-07-15 13:20:33.247736] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e8d30 00:36:20.812 [2024-07-15 13:20:33.249796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.812 [2024-07-15 13:20:33.249840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:20.813 [2024-07-15 13:20:33.261481] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0bc0 00:36:20.813 [2024-07-15 13:20:33.262987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.813 [2024-07-15 13:20:33.263028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:20.813 [2024-07-15 13:20:33.276694] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fc560 00:36:20.813 [2024-07-15 13:20:33.278258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:20.813 [2024-07-15 13:20:33.278308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.289060] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190dece0 00:36:21.071 [2024-07-15 13:20:33.290371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.290415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.301524] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ebb98 00:36:21.071 [2024-07-15 13:20:33.302389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.302429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.312694] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fda78 00:36:21.071 [2024-07-15 13:20:33.313704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.313744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.328952] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190dece0 00:36:21.071 [2024-07-15 
13:20:33.330736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.330799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.342938] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de038 00:36:21.071 [2024-07-15 13:20:33.344941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.344987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.356884] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f57b0 00:36:21.071 [2024-07-15 13:20:33.359058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.359099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.366314] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f92c0 00:36:21.071 [2024-07-15 13:20:33.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.367293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.382207] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0788 00:36:21.071 [2024-07-15 13:20:33.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.383896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.394712] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fa7d8 00:36:21.071 [2024-07-15 13:20:33.396169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.396212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.407755] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190feb58 00:36:21.071 [2024-07-15 13:20:33.409412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.409467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.419573] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e4578 
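Every failure group in this stretch of output follows the same three-record pattern: the TCP transport reports a data digest (CRC32C) mismatch for a PDU, the driver prints the WRITE that carried it, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), that is status code type 0x0, status code 0x22, with dnr:0 so the request may be retried. A hedged, minimal way to tally these groups from a saved copy of such a log (the file name below is an assumption, not something produced by this run):

    #!/usr/bin/env bash
    # Count the injected-digest-error pattern in a saved autotest log.
    # "autotest.log" is a placeholder name.
    log=autotest.log

    # grep -o counts occurrences rather than (possibly very long) matching lines.
    digest_errors=$(grep -o 'Data digest error on tqpair' "$log" | wc -l)
    transient_errors=$(grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log" | wc -l)

    echo "data digest errors reported by the TCP transport: $digest_errors"
    echo "completions with transient transport error:       $transient_errors"

    # Optional: which command identifiers failed most often.
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' "$log" |
        awk '{print $6}' | sort | uniq -c | sort -rn | head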
00:36:21.071 [2024-07-15 13:20:33.420715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.420794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.433514] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3060 00:36:21.071 [2024-07-15 13:20:33.435273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.435345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.446843] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ee5c8 00:36:21.071 [2024-07-15 13:20:33.448815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.448880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.461280] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f2d80 00:36:21.071 [2024-07-15 13:20:33.463307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.463369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.473608] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e95a0 00:36:21.071 [2024-07-15 13:20:33.475329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.475392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.484231] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f92c0 00:36:21.071 [2024-07-15 13:20:33.485142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.485196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.495940] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e7c50 00:36:21.071 [2024-07-15 13:20:33.496970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.497021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.511315] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) 
with pdu=0x2000190e0630 00:36:21.071 [2024-07-15 13:20:33.512588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.521790] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f2d80 00:36:21.071 [2024-07-15 13:20:33.522641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.522689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:21.071 [2024-07-15 13:20:33.535744] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e84c0 00:36:21.071 [2024-07-15 13:20:33.536991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.071 [2024-07-15 13:20:33.537046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.547303] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f4f40 00:36:21.329 [2024-07-15 13:20:33.548324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.548382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.562294] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eaef0 00:36:21.329 [2024-07-15 13:20:33.564162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.564217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.572282] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ea680 00:36:21.329 [2024-07-15 13:20:33.573153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.573224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.589425] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e88f8 00:36:21.329 [2024-07-15 13:20:33.591069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.591139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.602454] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x121c910) with pdu=0x2000190e9168 00:36:21.329 [2024-07-15 13:20:33.605059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.605144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.615587] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ff3c8 00:36:21.329 [2024-07-15 13:20:33.616828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.616905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.633432] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f8618 00:36:21.329 [2024-07-15 13:20:33.635470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.635520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.644636] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de470 00:36:21.329 [2024-07-15 13:20:33.646174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.646222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.657812] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f8a50 00:36:21.329 [2024-07-15 13:20:33.659506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.659560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.670310] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ed920 00:36:21.329 [2024-07-15 13:20:33.672465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.683310] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6890 00:36:21.329 [2024-07-15 13:20:33.684390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.684437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.694827] tcp.c:2164:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190df988 00:36:21.329 [2024-07-15 13:20:33.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.695842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.710301] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ebb98 00:36:21.329 [2024-07-15 13:20:33.712068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.712119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.719949] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ee190 00:36:21.329 [2024-07-15 13:20:33.720704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.720749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.736393] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e0ea0 00:36:21.329 [2024-07-15 13:20:33.738031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.738095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.748867] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f2510 00:36:21.329 [2024-07-15 13:20:33.751100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.751161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.762233] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f9b30 00:36:21.329 [2024-07-15 13:20:33.763057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.763124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.776857] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e9e10 00:36:21.329 [2024-07-15 13:20:33.778359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.778411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:21.329 [2024-07-15 13:20:33.789241] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eff18 00:36:21.329 [2024-07-15 13:20:33.791795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.329 [2024-07-15 13:20:33.791847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.803976] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ee5c8 00:36:21.587 [2024-07-15 13:20:33.805274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.805327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.818819] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f46d0 00:36:21.587 [2024-07-15 13:20:33.820454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.820508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.831388] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6458 00:36:21.587 [2024-07-15 13:20:33.833502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.833555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.844459] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e01f8 00:36:21.587 [2024-07-15 13:20:33.845513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.856076] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ea248 00:36:21.587 [2024-07-15 13:20:33.856997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.857049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.867704] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ecc78 00:36:21.587 [2024-07-15 13:20:33.868474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.868526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:21.587 
[2024-07-15 13:20:33.882951] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e4de8 00:36:21.587 [2024-07-15 13:20:33.884543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.884591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.894591] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eaef0 00:36:21.587 [2024-07-15 13:20:33.896530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.896590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:21.587 [2024-07-15 13:20:33.907943] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190eaef0 00:36:21.587 [2024-07-15 13:20:33.909153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.587 [2024-07-15 13:20:33.909207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.922707] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e84c0 00:36:21.588 [2024-07-15 13:20:33.924723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.924788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.932466] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ec840 00:36:21.588 [2024-07-15 13:20:33.933394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.933439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.948837] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e7818 00:36:21.588 [2024-07-15 13:20:33.950523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.950575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.961227] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de8a8 00:36:21.588 [2024-07-15 13:20:33.963558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.963606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 
m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.974882] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f0ff8 00:36:21.588 [2024-07-15 13:20:33.976270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.976315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:33.987175] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6890 00:36:21.588 [2024-07-15 13:20:33.988234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:33.988286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:34.003327] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ec840 00:36:21.588 [2024-07-15 13:20:34.005691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:34.005737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:34.015285] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f2d80 00:36:21.588 [2024-07-15 13:20:34.016627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:34.016675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:34.027759] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ebb98 00:36:21.588 [2024-07-15 13:20:34.028618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:34.028659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:21.588 [2024-07-15 13:20:34.039815] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e7c50 00:36:21.588 [2024-07-15 13:20:34.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.588 [2024-07-15 13:20:34.040957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.058392] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de038 00:36:21.846 [2024-07-15 13:20:34.060054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.071228] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f6cc8 00:36:21.846 [2024-07-15 13:20:34.073548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.073613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.085888] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ee190 00:36:21.846 [2024-07-15 13:20:34.087216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.087276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.103013] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e6b70 00:36:21.846 [2024-07-15 13:20:34.104949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.104999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.112688] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e4578 00:36:21.846 [2024-07-15 13:20:34.113623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.113691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.128429] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e4de8 00:36:21.846 [2024-07-15 13:20:34.130292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.130352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.140897] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190e3d08 00:36:21.846 [2024-07-15 13:20:34.142267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.142312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:21.846 [2024-07-15 13:20:34.153566] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190ed920 00:36:21.846 [2024-07-15 13:20:34.154780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:21.846 [2024-07-15 13:20:34.154821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:36:21.846 [2024-07-15 13:20:34.168758] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190de8a8
00:36:21.846 [2024-07-15 13:20:34.170459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:21.846 [2024-07-15 13:20:34.170506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:21.846 [2024-07-15 13:20:34.180911] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fb048
00:36:21.846 [2024-07-15 13:20:34.182474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:21.846 [2024-07-15 13:20:34.182526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:36:21.846 [2024-07-15 13:20:34.194187] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f7100
00:36:21.846 [2024-07-15 13:20:34.195524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:21.846 [2024-07-15 13:20:34.195572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:36:21.846 [2024-07-15 13:20:34.209747] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190f2d80
00:36:21.846 [2024-07-15 13:20:34.212232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:21.846 [2024-07-15 13:20:34.212282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:21.846
00:36:21.846                                                       Latency(us)
00:36:21.846 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:36:21.846 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:21.846 nvme0n1                     :       2.00   19160.07      74.84       0.00      0.00    6673.13    2591.65   19065.02
00:36:21.846 ===================================================================================================================
00:36:21.846 Total                       :              19160.07      74.84       0.00      0.00    6673.13    2591.65   19065.02
00:36:21.846 0
00:36:21.846 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:21.846 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:21.846 | .driver_specific
00:36:21.846 | .nvme_error
00:36:21.846 | .status_code
00:36:21.846 | .command_transient_transport_error'
00:36:21.846 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:21.846 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 124922
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 124922 ']'
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 124922
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124922
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:22.104 killing process with pid 124922
00:36:22.104 Received shutdown signal, test time was about 2.000000 seconds
00:36:22.104
00:36:22.104                                                       Latency(us)
00:36:22.104 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:36:22.104 ===================================================================================================================
00:36:22.104 Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124922'
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 124922
00:36:22.104 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 124922
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=125007
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 125007 /var/tmp/bperf.sock
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 125007 ']'
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:22.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
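The get_transient_errcount check traced just before the first bperf instance was killed is the pass/fail criterion for this test: bdevperf keeps per-status NVMe error counters because it was configured with bdev_nvme_set_options --nvme-error-stat, and the harness reads the counter back over the bperf RPC socket with bdev_get_iostat plus the jq filter shown in the trace; the run above returned 150, hence the (( 150 > 0 )) check. A standalone sketch of the same query, using only paths that appear in this log:

    #!/usr/bin/env bash
    # Read the "command transient transport error" counter from bdevperf
    # over its RPC socket and require that at least one was recorded.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    (( errcount > 0 )) && echo "digest-error injection was detected $errcount times"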
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:22.362 13:20:34 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:22.362 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:22.362 Zero copy mechanism will not be used.
00:36:22.363 [2024-07-15 13:20:34.777109] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization...
00:36:22.363 [2024-07-15 13:20:34.777255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125007 ]
00:36:22.620 [2024-07-15 13:20:34.921090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:22.620 [2024-07-15 13:20:34.992725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:22.620 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:22.620 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:22.620 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:22.620 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:23.275 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:23.533 nvme0n1
00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:36:23.533 13:20:35 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.793 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.793 Zero copy mechanism will not be used. 00:36:23.793 Running I/O for 2 seconds... 00:36:23.793 [2024-07-15 13:20:36.035192] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.035549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.035584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.039889] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.040202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.040229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.044454] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.044614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.044644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.048948] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.049038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.049063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.053451] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.053546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.053571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.057986] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.058079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.058106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.062793] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.062930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.062956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.067585] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.067844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.067871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.072381] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.072603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.072635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.077051] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.077203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.077229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.081916] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.082084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.082111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.086616] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.086735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.086776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.091219] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.091312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.091338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:36:23.793 [2024-07-15 13:20:36.095846] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.095975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.096001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.100576] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.100751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.100813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.105367] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.105621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.105668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.110042] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.110182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.110226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.793 [2024-07-15 13:20:36.114825] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.793 [2024-07-15 13:20:36.115110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.793 [2024-07-15 13:20:36.115156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.119631] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.119950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.119994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.124453] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.124672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.124712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
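The records streaming past here belong to the second run_bperf_err pass (randwrite, 131072-byte I/O, queue depth 16) whose setup was traced a little earlier. The len:32 in each WRITE matches a 128 KiB I/O on a 4096-byte-block namespace (32 x 4096 = 131072), where the first pass with 4096-byte I/O showed len:1. A condensed, hedged sketch of that traced sequence follows; all paths and arguments are copied from this log except TGT_SOCK, which is an assumed default for the target-side rpc_cmd, and the exact cadence implied by the -i 32 injection argument is not asserted:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    TGT_SOCK=/var/tmp/spdk.sock   # assumption: default socket of the nvmf target app

    # Start bdevperf waiting for configuration over its own RPC socket (-z):
    # 128 KiB random writes, queue depth 16, 2-second run.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    sleep 1   # the harness polls the socket via waitforlisten rather than sleeping

    # Keep per-status NVMe error counters and retry indefinitely instead of failing the bdev.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the subsystem with data digest (DDGST) enabled on the TCP connection.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption through the accel_error module (arguments as traced),
    # so data-digest verification fails and WRITEs complete with transient transport errors.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload; the digest errors in the surrounding records are the expected outcome.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests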
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.129113] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.129271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.129305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.133710] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.133931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.133966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.138507] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.138887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.138941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.143996] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.144219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.144265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.148649] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.148804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.148838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.153358] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.153646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.153681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.157877] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.158033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.158074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.162548] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.162799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.162839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.167214] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.167380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.167421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.171911] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.172124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.172162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.176594] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.176700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.181405] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.181895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.181935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.186406] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.187015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.187066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.191330] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.191605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.191646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.195995] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.196126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.196159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.200757] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.200940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.200972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.205508] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.205697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.205729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.210411] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.210684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.210718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.215071] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.215381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.215424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.219725] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.219932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.219968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.224488] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.224709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 
[2024-07-15 13:20:36.224747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.229296] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.229494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.229556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.234157] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.234323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.234369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.239226] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.794 [2024-07-15 13:20:36.239359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.794 [2024-07-15 13:20:36.239406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:23.794 [2024-07-15 13:20:36.244268] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.795 [2024-07-15 13:20:36.244755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.795 [2024-07-15 13:20:36.244827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:23.795 [2024-07-15 13:20:36.249524] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.795 [2024-07-15 13:20:36.249883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.795 [2024-07-15 13:20:36.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:23.795 [2024-07-15 13:20:36.255111] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:23.795 [2024-07-15 13:20:36.255527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:23.795 [2024-07-15 13:20:36.255579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.260599] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.260975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.261027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.266245] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.266589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.266650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.271124] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.271223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.271251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.276025] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.276285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.276321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.280706] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.280844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.280872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.285346] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.285440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.290079] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.290203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.290230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.294690] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.294809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.294836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.299498] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.299720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.299760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.304212] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.304338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.304363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.309123] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.054 [2024-07-15 13:20:36.309389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.054 [2024-07-15 13:20:36.309424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.054 [2024-07-15 13:20:36.313744] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.313900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.313925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.318385] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.318531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.318560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.323141] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.323452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.323484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.327926] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.328018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.328044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.332807] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.332905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.332931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.337704] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.337947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.337992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.342411] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.342599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.342625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.347284] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.347511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.347544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.352261] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.352532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.352565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.357024] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.357211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.357244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.361646] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.361808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.361839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.366254] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.366349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.366374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.370962] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.371076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.371107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.375650] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.375831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.375857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.380346] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.380459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.385031] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.385261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.389698] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.389907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.389935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.394285] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 
13:20:36.394512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.394538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.399159] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.399254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.399279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.403729] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.403858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.403883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.408598] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.408694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.408721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.413335] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.413504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.413535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.417956] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.418111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.418138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.422713] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.422948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.422974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.427272] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with 
pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.427561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.431869] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.432055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.432081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.436787] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.436925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.436950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.441666] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.441802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.441828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.446425] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.055 [2024-07-15 13:20:36.446524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.055 [2024-07-15 13:20:36.446556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.055 [2024-07-15 13:20:36.451162] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.451350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.451380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.455944] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.456164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.456188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.460725] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.460956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.460988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.465306] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.465416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.465443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.469958] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.470103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.470134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.474606] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.474744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.474797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.479296] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.479390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.479415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.483993] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.484126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.484156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.488704] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.488891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.488918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.493378] tcp.c:2164:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.493636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.493680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.498256] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.498519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.498563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.502753] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.503095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.503140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.507318] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.507490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.507531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.512234] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.512377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.512409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.517008] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.517128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.517158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.056 [2024-07-15 13:20:36.521625] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.056 [2024-07-15 13:20:36.521717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.056 [2024-07-15 13:20:36.521742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.526307] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.526464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.526496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.531029] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.531246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.535970] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.536169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.536200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.540647] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.540820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.540851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.545500] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.545635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.545661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.550209] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.550352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.550378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.554872] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.554984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.555014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.316 
[2024-07-15 13:20:36.559593] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.559708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.559732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.564374] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.564538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.569188] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.569343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.569373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.574118] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.574316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.574341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.578839] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.578964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.316 [2024-07-15 13:20:36.578988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.316 [2024-07-15 13:20:36.583436] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.316 [2024-07-15 13:20:36.583563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.583589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.588219] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.588350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.588377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.592981] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.593080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.593105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.597565] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.597660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.597685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.602439] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.602616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.602646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.607223] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.607359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.607384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.611920] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.612169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.616613] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.616729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.621299] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.621433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.621459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.626064] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.626217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.626242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.630848] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.630967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.630992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.635795] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.635897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.635923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.640792] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.640953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.640985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.645461] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.645641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.650622] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.650848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.650880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.655453] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.655842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.655894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.660138] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.660337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.660388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.664947] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.665198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.665247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.669560] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.669700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.669736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.317 [2024-07-15 13:20:36.674936] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.317 [2024-07-15 13:20:36.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.317 [2024-07-15 13:20:36.675224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.679701] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.679894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.679936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.684478] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.684585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.684617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.689360] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.689494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.689521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.694007] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.694176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.694203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.698559] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.698674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.698705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.703392] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.703488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.703515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.708234] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.708426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.713147] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.713259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.713286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.718160] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.718350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.718379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.724205] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.724365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 
13:20:36.724392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.728790] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.728946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.728971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.733588] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.733783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.733818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.738336] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.738466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.738491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.742987] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.743112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.743141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.747897] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.748071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.748095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.752535] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.752655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.752679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.757492] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.757631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:24.318 [2024-07-15 13:20:36.757656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.762131] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.762302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.762326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.767045] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.767175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.318 [2024-07-15 13:20:36.767206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.318 [2024-07-15 13:20:36.771777] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.318 [2024-07-15 13:20:36.771947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.319 [2024-07-15 13:20:36.771972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.319 [2024-07-15 13:20:36.776718] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.319 [2024-07-15 13:20:36.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.319 [2024-07-15 13:20:36.776899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.319 [2024-07-15 13:20:36.781495] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.319 [2024-07-15 13:20:36.781596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.319 [2024-07-15 13:20:36.781620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.577 [2024-07-15 13:20:36.786257] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.577 [2024-07-15 13:20:36.786414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.577 [2024-07-15 13:20:36.786446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.577 [2024-07-15 13:20:36.790991] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.577 [2024-07-15 13:20:36.791101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.791126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.795742] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.795900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.795929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.800632] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.800825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.800866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.805474] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.805666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.805700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.810243] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.810459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.814880] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.815070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.815113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.819502] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.819615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.819642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.824465] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.824575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.824600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.829344] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.829473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.829499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.834313] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.834476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.834501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.839086] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.839258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.839283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.843836] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.843932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.843957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.848629] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.848822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.848854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.853336] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.853456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.853484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.858034] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.858169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.858196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.862787] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.862986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.863021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.867576] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.867685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.867712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.872412] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.872568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.872594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.877146] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.877276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.877301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.881783] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.881884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.881910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.886568] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.886738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.886775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.891250] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.891363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.891387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.895948] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.896100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.896125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.900735] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.900914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.900939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.905358] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.905490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.905515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.910200] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.910414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.910444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.915156] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.915288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.915319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.920046] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 13:20:36.920177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.920208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.578 [2024-07-15 13:20:36.924881] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.578 [2024-07-15 
13:20:36.925075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.578 [2024-07-15 13:20:36.925102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.929839] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.929940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.929973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.934578] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.934721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.934746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.939454] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.939683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.939709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.944513] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.944613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.944644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.949355] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.949533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.949565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.954481] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.954615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.954647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.959249] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 
00:36:24.579 [2024-07-15 13:20:36.959355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.959386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.964519] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.964681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.964706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.969679] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.969796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.969824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.974625] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.979667] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.979861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.979888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.984523] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.984635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.984660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.989511] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.989692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.989724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.994334] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) 
with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.994518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:36.999180] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:36.999274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:36.999300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.003971] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.004133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.004159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.008713] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.008860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.008895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.013576] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.013730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.013776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.018320] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.018517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.018560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.023343] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.023469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.023497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.028340] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.028552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.028587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.033424] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.033592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.033618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.038132] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.038245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.038271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.579 [2024-07-15 13:20:37.042955] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.579 [2024-07-15 13:20:37.043134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.579 [2024-07-15 13:20:37.043159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.047705] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.047815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.047841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.052526] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.052732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.052783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.057372] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.057544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.057576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.062175] 
tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.062366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.062418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.067056] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.067229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.067261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.071963] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.072128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.072170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.077044] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.077182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.077216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.082779] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.082989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.087529] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.087649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.087694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.092505] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.092656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.092687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 
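The repeated records above appear to come from a data-digest error test: tcp.c:data_crc32_calc_done() reports a DDGST mismatch on the initiator side, and each affected WRITE is then completed with TRANSIENT TRANSPORT ERROR (00/22). As a minimal sketch of the check being exercised, assuming the standard CRC32C (Castagnoli) digest that NVMe/TCP uses for its header and data digests, the self-contained program below computes the digest bitwise and shows how a single corrupted payload byte produces the mismatch; the payload string, variable names, and printed message are illustrative only and are not taken from the SPDK sources or this test output.

/*
 * Illustration only -- not part of the captured test output. The NVMe/TCP
 * data digest (DDGST) reported as failing above is a CRC32C over the PDU
 * payload. This sketch computes CRC32C bitwise (reflected Castagnoli
 * polynomial 0x82F63B78) and shows how one flipped payload byte yields the
 * kind of "Data digest error" the initiator surfaces as a TRANSIENT
 * TRANSPORT ERROR (00/22) completion.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			/* Reflected CRC32C: shift right, conditionally xor the polynomial. */
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0u);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	const char *payload = "123456789";
	uint32_t expected = crc32c((const uint8_t *)payload, strlen(payload));

	/* Well-known CRC32C check value for "123456789". */
	assert(expected == 0xE3069283u);

	/* Corrupt a single payload byte and recompute: the digests no longer
	 * match, which is the condition the records above report. */
	uint8_t corrupted[9];
	memcpy(corrupted, payload, sizeof(corrupted));
	corrupted[0] ^= 0x01;
	uint32_t received = crc32c(corrupted, sizeof(corrupted));

	printf("expected=0x%08x received=0x%08x -> %s\n",
	       (unsigned)expected, (unsigned)received,
	       expected == received ? "ok" : "data digest error");
	return 0;
}

In practice the bitwise loop above is only for illustration; production code (including SPDK) typically uses table-driven or hardware-accelerated CRC32C (e.g. the SSE4.2 crc32 instructions) rather than computing it bit by bit.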
[2024-07-15 13:20:37.097356] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.097540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.097572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.102146] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.102301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.102344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.106943] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.107342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.107393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.111863] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.112362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.116909] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.117181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.117225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.121581] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.121810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.121857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.126454] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.126715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.126782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.131333] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.131494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.131534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.136146] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.136264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.136299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.140904] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.141083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.141120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.145697] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.145861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.145896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.150363] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.150483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.150520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.155334] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.155561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.155593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.160172] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.160431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.841 [2024-07-15 13:20:37.160477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.841 [2024-07-15 13:20:37.164961] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.841 [2024-07-15 13:20:37.165175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.165207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.169744] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.169908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.169940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.174592] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.174708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.174739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.179437] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.179613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.179643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.184262] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.184377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.184411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.189026] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.189141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.189169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.193784] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.193987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.194020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.198426] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.198604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.198637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.203204] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.203397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.203430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.207920] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.208038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.208066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.212742] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.212914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.212952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.217730] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.217935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.217974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.222414] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.222516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.222543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:24.842 [2024-07-15 13:20:37.227227] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90 00:36:24.842 [2024-07-15 13:20:37.227451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.842 [2024-07-15 13:20:37.227488] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:24.842 [2024-07-15 13:20:37.232348] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90
00:36:24.842 [2024-07-15 13:20:37.232546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:24.842 [2024-07-15 13:20:37.232583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:24.842 [2024-07-15 13:20:37.237072] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90
00:36:24.842 [2024-07-15 13:20:37.237298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:24.842 [2024-07-15 13:20:37.237333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence repeats for each subsequent WRITE from 13:20:37.242 through 13:20:38.008: tcp.c:2164:data_crc32_calc_done reports "Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90", nvme_qpair.c: 243:nvme_io_qpair_print_command prints the WRITE (sqid:1, cid:0 then cid:1 from 13:20:37.348 onward, varying LBAs, len:32), and nvme_qpair.c: 474:spdk_nvme_print_completion reports COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061 ...]
00:36:25.626 [2024-07-15 13:20:38.015257] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90
00:36:25.626 [2024-07-15 13:20:38.015488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:25.626 [2024-07-15 13:20:38.015522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:25.626 [2024-07-15 13:20:38.023750] tcp.c:2164:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x121c910) with pdu=0x2000190fef90
00:36:25.626 [2024-07-15 13:20:38.023992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1
nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:25.626 [2024-07-15 13:20:38.024021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:25.626 00:36:25.626 Latency(us) 00:36:25.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.626 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:25.626 nvme0n1 : 2.00 6127.79 765.97 0.00 0.00 2604.35 1817.13 10009.13 00:36:25.626 =================================================================================================================== 00:36:25.626 Total : 6127.79 765.97 0.00 0.00 2604.35 1817.13 10009.13 00:36:25.626 0 00:36:25.626 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:25.626 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:25.626 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:25.626 | .driver_specific 00:36:25.626 | .nvme_error 00:36:25.626 | .status_code 00:36:25.626 | .command_transient_transport_error' 00:36:25.626 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 )) 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 125007 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 125007 ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 125007 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125007 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125007' 00:36:26.203 killing process with pid 125007 00:36:26.203 Received shutdown signal, test time was about 2.000000 seconds 00:36:26.203 00:36:26.203 Latency(us) 00:36:26.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.203 =================================================================================================================== 00:36:26.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 125007 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 125007 00:36:26.203 13:20:38 
nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 124742 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 124742 ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 124742 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124742 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:26.203 killing process with pid 124742 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124742' 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 124742 00:36:26.203 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 124742 00:36:26.462 00:36:26.462 real 0m16.783s 00:36:26.462 user 0m31.291s 00:36:26.462 sys 0m6.166s 00:36:26.462 ************************************ 00:36:26.462 END TEST nvmf_digest_error 00:36:26.462 ************************************ 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@492 -- # nvmfcleanup 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.462 rmmod nvme_tcp 00:36:26.462 rmmod nvme_fabrics 00:36:26.462 rmmod nvme_keyring 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@493 -- # '[' -n 124742 ']' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@494 -- # killprocess 124742 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- 
common/autotest_common.sh@948 -- # '[' -z 124742 ']' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 124742 00:36:26.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (124742) - No such process 00:36:26.462 Process with pid 124742 is not found 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 124742 is not found' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@282 -- # remove_spdk_ns 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:26.462 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:36:26.721 00:36:26.721 real 0m35.458s 00:36:26.721 user 1m3.967s 00:36:26.721 sys 0m12.536s 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:26.721 ************************************ 00:36:26.721 END TEST nvmf_digest 00:36:26.721 ************************************ 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@115 -- # [[ tcp == \t\c\p ]] 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@117 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:26.721 13:20:38 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:26.721 ************************************ 00:36:26.721 START TEST nvmf_mdns_discovery 00:36:26.721 ************************************ 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:36:26.721 * Looking for test storage... 
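For context on the assertion that closed the digest-error test above ((( 395 > 0 ))): the transient-error count comes from a single bdev_get_iostat RPC filtered with jq. A minimal sketch reconstructed from the trace (socket path and JSON field names as they appear there; not the verbatim host/digest.sh source):

  # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded by bdev_nvme
  # for a given bdev, via bperf's RPC socket as used in the trace.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # The digest-error test only asserts the counter is non-zero after injecting
  # corrupted data digests:
  (( $(get_transient_errcount nvme0n1) > 0 ))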
00:36:26.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:36:26.721 13:20:39 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:36:26.721 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@436 -- # nvmf_veth_init 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:26.722 13:20:39 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:36:26.722 Cannot find device "nvmf_tgt_br" 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:36:26.722 Cannot find device "nvmf_tgt_br2" 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # true 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:36:26.722 Cannot find device "nvmf_tgt_br" 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:36:26.722 Cannot find device "nvmf_tgt_br2" 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:36:26.722 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:26.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:26.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- 
nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:36:26.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:36:26.981 00:36:26.981 --- 10.0.0.2 ping statistics --- 00:36:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.981 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:36:26.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:26.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:36:26.981 00:36:26.981 --- 10.0.0.3 ping statistics --- 00:36:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.981 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:26.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:36:26.981 00:36:26.981 --- 10.0.0.1 ping statistics --- 00:36:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.981 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@437 -- # return 0 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@485 -- # nvmfpid=125295 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@486 -- # waitforlisten 125295 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 --wait-for-rpc 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 125295 ']' 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.981 13:20:39 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:27.241 [2024-07-15 13:20:39.515746] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:27.241 [2024-07-15 13:20:39.517313] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
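The nvmf_veth_init sequence traced above builds a small bridged topology: the initiator interface (10.0.0.1) stays in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) move into nvmf_tgt_ns_spdk, and everything is joined through the nvmf_br bridge. A condensed sketch with commands copied from the trace (the second target pair, nvmf_tgt_if2/10.0.0.3, is configured the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target    <-> bridge
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # root namespace -> target namespace, through the bridge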
00:36:27.241 [2024-07-15 13:20:39.517403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.241 [2024-07-15 13:20:39.654865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.500 [2024-07-15 13:20:39.715912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.500 [2024-07-15 13:20:39.715971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.500 [2024-07-15 13:20:39.715983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.500 [2024-07-15 13:20:39.715991] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.500 [2024-07-15 13:20:39.715998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:27.500 [2024-07-15 13:20:39.716023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.500 [2024-07-15 13:20:39.716336] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.072 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 [2024-07-15 13:20:40.576777] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
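The target above was started with --wait-for-rpc, which is what lets nvmf_set_config run before the framework initializes. A minimal sketch of that bring-up pattern, using the binary path and flags from the trace (waitforlisten is an autotest helper and is shown only as a comment):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten "$nvmfpid"   # autotest helper: polls until the RPC socket answers
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --discovery-filter=address   # pre-init config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                         # subsystems start here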
00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 [2024-07-15 13:20:40.588719] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 [2024-07-15 13:20:40.596816] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 null0 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 null1 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 null2 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 null3 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 
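Stripped of the xtrace noise, the target-side provisioning above is a short RPC sequence; the sketch below uses rpc.py directly as a stand-in for the autotest rpc_cmd wrapper, with all arguments copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # simplified stand-in for rpc_cmd

  "$rpc" nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options exactly as traced
  "$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                    # discovery service that avahi will advertise
  for b in null0 null1 null2 null3; do
      "$rpc" bdev_null_create "$b" 1000 512         # name, size, block size as traced
  done
  "$rpc" bdev_wait_for_examine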
00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=125341 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 125341 /tmp/host.sock 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 125341 ']' 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:28.330 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:28.330 [2024-07-15 13:20:40.698531] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
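Two SPDK applications are now in play: the NVMe-oF target on the default RPC socket inside the namespace, and a second instance (the mDNS discovery "host") just launched in the root namespace on a private socket. A rough sketch of that pattern, with paths and the discovery command taken from the trace (the discovery RPC itself is issued a bit further down, once the mDNS responder is up):

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  # waitforlisten "$hostpid" /tmp/host.sock   # autotest helper: wait for this socket
  # All host-side RPCs then name the socket explicitly, e.g. (from later in the trace):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test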
00:36:28.330 [2024-07-15 13:20:40.698627] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125341 ] 00:36:28.588 [2024-07-15 13:20:40.834126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.588 [2024-07-15 13:20:40.894523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:36:28.588 Failed to kill daemon: No such file or directory 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # : 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=125355 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:36:28.588 13:20:40 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:36:28.588 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:36:28.588 Successfully dropped root privileges. 00:36:28.588 avahi-daemon 0.8 starting up. 00:36:28.588 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:36:28.588 Successfully called chroot(). 00:36:28.588 Successfully dropped remaining capabilities. 00:36:28.588 No service file found in /etc/avahi/services. 00:36:28.588 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:36:28.588 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:36:28.588 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:36:28.588 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:36:28.588 Network interface enumeration completed. 00:36:28.588 Registering new address record for fe80::c01:75ff:febe:4eac on nvmf_tgt_if2.*. 00:36:28.588 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:36:28.588 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:36:28.588 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:36:29.521 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1513604568. 
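The avahi responder starting up here was launched with its configuration passed over a pipe (/dev/fd/63 via process substitution). An equivalent invocation, reconstructed from the echo -e in the trace, restricted to the two target interfaces and IPv4 only:

  # Run avahi-daemon inside the target namespace with an inline config.
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
      '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
  avahipid=$!   # killed by the EXIT trap set earlier in the trace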
00:36:29.521 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:29.521 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.521 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.779 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.779 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:29.779 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.779 13:20:41 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:36:29.779 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:29.780 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 
-- # sort 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.038 [2024-07-15 13:20:42.302297] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:30.038 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 [2024-07-15 13:20:42.380863] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 [2024-07-15 13:20:42.424700] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 [2024-07-15 13:20:42.432631] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.039 13:20:42 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:36:30.973 [2024-07-15 13:20:43.202303] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:36:31.540 [2024-07-15 13:20:43.802318] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:31.540 [2024-07-15 13:20:43.802362] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:36:31.540 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:31.540 cookie is 0 00:36:31.540 is_local: 1 00:36:31.540 our_own: 0 00:36:31.540 wide_area: 0 00:36:31.540 multicast: 1 00:36:31.540 cached: 1 00:36:31.540 [2024-07-15 13:20:43.902302] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:31.540 [2024-07-15 13:20:43.902344] bdev_mdns_client.c: 
259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:36:31.540 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:31.540 cookie is 0 00:36:31.540 is_local: 1 00:36:31.540 our_own: 0 00:36:31.540 wide_area: 0 00:36:31.540 multicast: 1 00:36:31.540 cached: 1 00:36:31.540 [2024-07-15 13:20:43.902359] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:36:31.540 [2024-07-15 13:20:44.002306] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:31.540 [2024-07-15 13:20:44.002342] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:36:31.540 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:31.540 cookie is 0 00:36:31.540 is_local: 1 00:36:31.540 our_own: 0 00:36:31.540 wide_area: 0 00:36:31.540 multicast: 1 00:36:31.540 cached: 1 00:36:31.798 [2024-07-15 13:20:44.102306] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:31.798 [2024-07-15 13:20:44.102349] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:36:31.798 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:31.798 cookie is 0 00:36:31.798 is_local: 1 00:36:31.798 our_own: 0 00:36:31.798 wide_area: 0 00:36:31.798 multicast: 1 00:36:31.798 cached: 1 00:36:31.798 [2024-07-15 13:20:44.102364] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:36:32.363 [2024-07-15 13:20:44.806472] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:36:32.363 [2024-07-15 13:20:44.806514] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:36:32.363 [2024-07-15 13:20:44.806535] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:32.620 [2024-07-15 13:20:44.892599] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:36:32.620 [2024-07-15 13:20:44.949549] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:36:32.620 [2024-07-15 13:20:44.949597] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:36:32.620 [2024-07-15 13:20:45.005758] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:32.620 [2024-07-15 13:20:45.005807] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:32.620 [2024-07-15 13:20:45.005828] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:32.878 [2024-07-15 13:20:45.093900] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:36:32.878 [2024-07-15 13:20:45.150169] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:36:32.878 [2024-07-15 13:20:45.150220] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:35.420 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:36:35.421 
13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:35.421 13:20:47 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.421 13:20:47 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.796 13:20:48 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:36.796 [2024-07-15 13:20:48.956705] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:36.796 [2024-07-15 13:20:48.957271] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:36:36.796 [2024-07-15 13:20:48.957329] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:36.796 [2024-07-15 13:20:48.957368] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:36.796 [2024-07-15 13:20:48.957383] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:36.796 [2024-07-15 13:20:48.968703] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:36:36.796 [2024-07-15 13:20:48.969319] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:36:36.796 [2024-07-15 13:20:48.969392] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.796 13:20:48 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:36:36.796 [2024-07-15 13:20:49.099389] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:36:36.796 [2024-07-15 13:20:49.100373] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:36:36.796 [2024-07-15 13:20:49.163623] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:36:36.796 [2024-07-15 13:20:49.163681] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:36.796 [2024-07-15 13:20:49.163690] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:36.796 [2024-07-15 13:20:49.163712] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:36.796 [2024-07-15 13:20:49.163760] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:36:36.796 [2024-07-15 13:20:49.163784] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:36:36.796 [2024-07-15 13:20:49.163791] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:36:36.796 [2024-07-15 13:20:49.163806] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:36.796 [2024-07-15 13:20:49.209537] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:36:36.796 [2024-07-15 13:20:49.209581] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:36:36.796 [2024-07-15 13:20:49.209641] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:36.796 [2024-07-15 13:20:49.209652] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:37.730 13:20:49 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.730 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:37.731 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.990 [2024-07-15 13:20:50.301184] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:36:37.990 [2024-07-15 13:20:50.301227] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:37.990 [2024-07-15 13:20:50.301265] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:37.990 [2024-07-15 13:20:50.301279] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:37.990 [2024-07-15 13:20:50.302361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.302403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.302417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.302428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.302438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.302447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.302456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.302465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.302475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:36:37.990 13:20:50 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:37.990 [2024-07-15 13:20:50.309223] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:36:37.990 [2024-07-15 13:20:50.309290] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:37.990 [2024-07-15 13:20:50.310363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.310402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.310416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.310426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.310436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.310445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.310455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:37.990 [2024-07-15 13:20:50.310464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:37.990 [2024-07-15 13:20:50.310473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.990 [2024-07-15 13:20:50.312320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.990 13:20:50 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:36:37.990 [2024-07-15 13:20:50.320320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.990 [2024-07-15 13:20:50.322339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.990 [2024-07-15 13:20:50.322464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.990 [2024-07-15 13:20:50.322488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.322499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.322518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.322544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.322555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.322566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.322582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.330334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.330426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.330448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.991 [2024-07-15 13:20:50.330459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.330475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.330489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.330497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.330506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.991 [2024-07-15 13:20:50.330521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.332398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.991 [2024-07-15 13:20:50.332482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.332503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.332514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.332529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.332543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.332551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.332565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.332580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
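The repeating "Resetting controller failed" entries above and below are expected at this point in the test: the 4420 listeners were just removed from nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:cnode20 (the nvmf_subsystem_remove_listener calls at mdns_discovery.sh@160/@161), so the host's existing admin queue pairs to 10.0.0.2:4420 and 10.0.0.3:4420 are torn down (the ABORTED - SQ DELETION completions) and every reconnect attempt to those ports is refused. The connect() errno 111 printed by posix_sock_create is ECONNREFUSED; a quick way to confirm the mapping, purely illustrative and not part of the test:

    # Decode errno 111 (ECONNREFUSED, "Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

The bdev_nvme retry path keeps alternating between the two controllers (tqpair 0xb0be70 for cnode0 on 10.0.0.2, tqpair 0xac5140 for cnode20 on 10.0.0.3) until the stale 4420 paths are dropped in favor of the 4421 listeners added earlier.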
00:36:37.991 [2024-07-15 13:20:50.340402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.340580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.340605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.991 [2024-07-15 13:20:50.340617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.340636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.340651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.340661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.340671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.991 [2024-07-15 13:20:50.340687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.342451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.991 [2024-07-15 13:20:50.342532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.342553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.342563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.342578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.342607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.342617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.342626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.342640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.991 [2024-07-15 13:20:50.350499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.350608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.350630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.991 [2024-07-15 13:20:50.350641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.350659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.350685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.350695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.350704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.991 [2024-07-15 13:20:50.350719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.352501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.991 [2024-07-15 13:20:50.352583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.352604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.352614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.352630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.352644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.352653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.352662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.352676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.991 [2024-07-15 13:20:50.360563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.360661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.360688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.991 [2024-07-15 13:20:50.360699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.360715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.360730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.360739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.360748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.991 [2024-07-15 13:20:50.360775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.362551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.991 [2024-07-15 13:20:50.362631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.362651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.362661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.362677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.362702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.362712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.362721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.362736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.991 [2024-07-15 13:20:50.370623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.370710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.370731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.991 [2024-07-15 13:20:50.370742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.370758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.370797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.370808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.370817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.991 [2024-07-15 13:20:50.370832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.991 [2024-07-15 13:20:50.372601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.991 [2024-07-15 13:20:50.372685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.372705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.991 [2024-07-15 13:20:50.372716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.991 [2024-07-15 13:20:50.372732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.991 [2024-07-15 13:20:50.372745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.991 [2024-07-15 13:20:50.372754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.991 [2024-07-15 13:20:50.372774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.991 [2024-07-15 13:20:50.372791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.991 [2024-07-15 13:20:50.380681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.991 [2024-07-15 13:20:50.380780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.991 [2024-07-15 13:20:50.380801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.380812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.380828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.380842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.380851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.380860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.380875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.382655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.382776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.382799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.992 [2024-07-15 13:20:50.382809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.382826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.382852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.382862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.382871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.992 [2024-07-15 13:20:50.382886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
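When triaging a run like this it helps to confirm that the reconnect failures are confined to the two expected qpairs rather than spread across other endpoints. A quick tally over a captured console log works; the file name build.log below is only an example:

    # Count reconnect attempts per qpair address; only the two failing qpairs should appear.
    grep -o 'sock connection error of tqpair=0x[0-9a-f]*' build.log | sort | uniq -c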
00:36:37.992 [2024-07-15 13:20:50.390737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.992 [2024-07-15 13:20:50.390830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.390851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.390862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.390878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.390902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.390913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.390922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.390936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.392725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.392816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.392837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.992 [2024-07-15 13:20:50.392847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.392862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.392876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.392885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.392894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.992 [2024-07-15 13:20:50.392908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.992 [2024-07-15 13:20:50.400801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.992 [2024-07-15 13:20:50.400894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.400915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.400926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.400942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.400956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.400964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.400974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.400989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.402784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.402874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.402895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.992 [2024-07-15 13:20:50.402905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.402921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.402946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.402956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.402965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.992 [2024-07-15 13:20:50.402979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.992 [2024-07-15 13:20:50.410861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.992 [2024-07-15 13:20:50.410945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.410966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.410976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.410992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.411016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.411026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.411035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.411050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.412833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.412917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.412937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.992 [2024-07-15 13:20:50.412948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.412963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.412977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.412986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.412995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.992 [2024-07-15 13:20:50.413008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.992 [2024-07-15 13:20:50.420916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.992 [2024-07-15 13:20:50.421002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.421022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.421032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.421048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.421062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.421070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.421079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.421093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.422885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.422964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.422984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.992 [2024-07-15 13:20:50.422994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.423010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.423034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.423044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.423053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.992 [2024-07-15 13:20:50.423067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.992 [2024-07-15 13:20:50.430972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:36:37.992 [2024-07-15 13:20:50.431054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.431074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac5140 with addr=10.0.0.3, port=4420 00:36:37.992 [2024-07-15 13:20:50.431085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac5140 is same with the state(5) to be set 00:36:37.992 [2024-07-15 13:20:50.431110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5140 (9): Bad file descriptor 00:36:37.992 [2024-07-15 13:20:50.431126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:36:37.992 [2024-07-15 13:20:50.431135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:36:37.992 [2024-07-15 13:20:50.431143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:36:37.992 [2024-07-15 13:20:50.431158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.992 [2024-07-15 13:20:50.432935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:37.992 [2024-07-15 13:20:50.433016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.992 [2024-07-15 13:20:50.433036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0be70 with addr=10.0.0.2, port=4420 00:36:37.993 [2024-07-15 13:20:50.433046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0be70 is same with the state(5) to be set 00:36:37.993 [2024-07-15 13:20:50.433062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0be70 (9): Bad file descriptor 00:36:37.993 [2024-07-15 13:20:50.433076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:37.993 [2024-07-15 13:20:50.433084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:37.993 [2024-07-15 13:20:50.433093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:37.993 [2024-07-15 13:20:50.433107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
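The repeated connect() failures above are errno 111 (ECONNREFUSED): the test has torn down the port-4420 listeners, so every reconnect attempt by the bdev_nvme reset/reconnect poller is refused until the next discovery log page drops the dead 4420 path ("not found" below) and leaves only the 4421 path. A minimal way to watch that failover by hand, using only RPCs already exercised in this log (rpc_cmd is the test-harness wrapper around scripts/rpc.py; the /tmp/host.sock socket and the mdns0_nvme0 controller name are taken from the surrounding test and may differ elsewhere):

  # controller state while the stale 4420 path is still failing to reconnect
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
  # mDNS discovery services driving the path updates
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
  # namespaces still reachable through the surviving 4421 path
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'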
00:36:37.993 [2024-07-15 13:20:50.439283] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:36:37.993 [2024-07-15 13:20:50.439314] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:36:37.993 [2024-07-15 13:20:50.439342] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:37.993 [2024-07-15 13:20:50.440304] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:36:37.993 [2024-07-15 13:20:50.440334] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:37.993 [2024-07-15 13:20:50.440354] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:38.250 [2024-07-15 13:20:50.525393] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:36:38.250 [2024-07-15 13:20:50.526379] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 
mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:39.182 [2024-07-15 13:20:51.602304] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.182 13:20:51 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:36:40.554 13:20:52 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.554 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.555 13:20:52 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:40.555 [2024-07-15 13:20:52.855710] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:36:40.555 2024/07/15 13:20:52 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:36:40.555 request: 00:36:40.555 { 00:36:40.555 "method": "bdev_nvme_start_mdns_discovery", 00:36:40.555 "params": { 00:36:40.555 "name": "mdns", 00:36:40.555 "svcname": "_nvme-disc._http", 00:36:40.555 "hostnqn": "nqn.2021-12.io.spdk:test" 00:36:40.555 } 00:36:40.555 } 00:36:40.555 Got JSON-RPC error response 00:36:40.555 GoRPCClient: error on JSON-RPC call 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.555 13:20:52 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:36:41.134 [2024-07-15 13:20:53.444367] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:36:41.134 [2024-07-15 13:20:53.544360] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:36:41.416 [2024-07-15 13:20:53.644370] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:41.416 [2024-07-15 13:20:53.644419] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:36:41.416 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:41.416 cookie is 0 00:36:41.416 is_local: 1 00:36:41.416 our_own: 0 00:36:41.416 wide_area: 0 00:36:41.416 multicast: 1 00:36:41.416 cached: 1 00:36:41.416 [2024-07-15 13:20:53.744371] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:41.416 [2024-07-15 13:20:53.744425] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:36:41.416 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:41.416 cookie is 0 00:36:41.416 is_local: 1 00:36:41.416 our_own: 0 00:36:41.416 wide_area: 0 00:36:41.416 multicast: 1 00:36:41.416 cached: 1 00:36:41.416 [2024-07-15 13:20:53.744441] bdev_mdns_client.c: 
322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:36:41.416 [2024-07-15 13:20:53.844369] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:41.416 [2024-07-15 13:20:53.844416] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:36:41.416 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:41.416 cookie is 0 00:36:41.416 is_local: 1 00:36:41.416 our_own: 0 00:36:41.416 wide_area: 0 00:36:41.416 multicast: 1 00:36:41.416 cached: 1 00:36:41.673 [2024-07-15 13:20:53.944373] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:41.673 [2024-07-15 13:20:53.944422] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:36:41.673 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:41.673 cookie is 0 00:36:41.673 is_local: 1 00:36:41.673 our_own: 0 00:36:41.673 wide_area: 0 00:36:41.673 multicast: 1 00:36:41.673 cached: 1 00:36:41.673 [2024-07-15 13:20:53.944439] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:36:42.245 [2024-07-15 13:20:54.648252] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:36:42.245 [2024-07-15 13:20:54.648297] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:36:42.245 [2024-07-15 13:20:54.648318] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:36:42.503 [2024-07-15 13:20:54.734393] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:36:42.503 [2024-07-15 13:20:54.794741] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:36:42.503 [2024-07-15 13:20:54.794803] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:36:42.503 [2024-07-15 13:20:54.848033] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:42.503 [2024-07-15 13:20:54.848073] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:42.503 [2024-07-15 13:20:54.848093] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:42.503 [2024-07-15 13:20:54.934164] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:36:42.762 [2024-07-15 13:20:54.994639] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:36:42.762 [2024-07-15 13:20:54.994698] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:36:46.040 13:20:57 
nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.040 13:20:57 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.040 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.041 [2024-07-15 13:20:58.038734] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:36:46.041 2024/07/15 13:20:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:36:46.041 request: 00:36:46.041 { 00:36:46.041 "method": "bdev_nvme_start_mdns_discovery", 00:36:46.041 "params": { 00:36:46.041 "name": "cdc", 00:36:46.041 "svcname": "_nvme-disc._tcp", 00:36:46.041 "hostnqn": "nqn.2021-12.io.spdk:test" 00:36:46.041 } 00:36:46.041 } 00:36:46.041 Got JSON-RPC error response 00:36:46.041 GoRPCClient: error on JSON-RPC call 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 125341 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 125341 00:36:46.041 [2024-07-15 13:20:58.244355] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 125355 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:36:46.041 Got SIGTERM, quitting. 00:36:46.041 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:36:46.041 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:36:46.041 avahi-daemon 0.8 exiting. 
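The tail of the test above covered the negative paths: re-issuing bdev_nvme_start_mdns_discovery with an already-registered name ("mdns", even for a different svcname) or with an already-watched service ("_nvme-disc._tcp", even under the new name "cdc") is rejected with Code=-17 (File exists), and bdev_nvme_stop_mdns_discovery then stops the avahi poller before the target processes are killed. A condensed sketch of that RPC sequence, with commands and arguments copied from the log above (rpc_cmd is the test-harness wrapper around scripts/rpc.py, and the /tmp/host.sock path is assumed to still be valid at that point in the run):

  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # duplicate name "mdns" -> error received, Code=-17 Msg=File exists
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true
  # duplicate svcname "_nvme-disc._tcp" under a new name -> Code=-17 Msg=File exists
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || true
  # stops the avahi poller for _nvme-disc._tcp
  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns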
00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.041 rmmod nvme_tcp 00:36:46.041 rmmod nvme_fabrics 00:36:46.041 rmmod nvme_keyring 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # '[' -n 125295 ']' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@494 -- # killprocess 125295 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 125295 ']' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 125295 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125295 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:46.041 killing process with pid 125295 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125295' 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 125295 00:36:46.041 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 125295 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:36:46.300 00:36:46.300 real 0m19.692s 00:36:46.300 user 0m32.984s 00:36:46.300 sys 
0m4.956s 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.300 ************************************ 00:36:46.300 END TEST nvmf_mdns_discovery 00:36:46.300 ************************************ 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@120 -- # [[ 1 -eq 1 ]] 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@121 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:46.300 ************************************ 00:36:46.300 START TEST nvmf_host_multipath 00:36:46.300 ************************************ 00:36:46.300 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:36:46.559 * Looking for test storage... 00:36:46.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath 
-- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@436 -- # nvmf_veth_init 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:36:46.559 Cannot find device "nvmf_tgt_br" 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:36:46.559 Cannot find device "nvmf_tgt_br2" 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@160 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:36:46.559 Cannot find device "nvmf_tgt_br" 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:36:46.559 Cannot find device "nvmf_tgt_br2" 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:46.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:46.559 Cannot open 
network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:36:46.559 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:46.560 13:20:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:46.560 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:46.560 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:36:46.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:46.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:36:46.818 00:36:46.818 --- 10.0.0.2 ping statistics --- 00:36:46.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.818 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:36:46.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:46.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:36:46.818 00:36:46.818 --- 10.0.0.3 ping statistics --- 00:36:46.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.818 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:46.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:46.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:36:46.818 00:36:46.818 --- 10.0.0.1 ping statistics --- 00:36:46.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.818 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@437 -- # return 0 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@485 -- # nvmfpid=125906 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@486 -- # waitforlisten 125906 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 125906 ']' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:46.818 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:46.818 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:46.818 [2024-07-15 13:20:59.249110] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:46.818 [2024-07-15 13:20:59.250298] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:36:46.818 [2024-07-15 13:20:59.250365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.079 [2024-07-15 13:20:59.387982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:47.079 [2024-07-15 13:20:59.457516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.079 [2024-07-15 13:20:59.457722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.080 [2024-07-15 13:20:59.457910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.080 [2024-07-15 13:20:59.458102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.080 [2024-07-15 13:20:59.458317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.080 [2024-07-15 13:20:59.458473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.080 [2024-07-15 13:20:59.458485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.080 [2024-07-15 13:20:59.511566] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:47.080 [2024-07-15 13:20:59.512045] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:47.080 [2024-07-15 13:20:59.512255] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
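For reference, the nvmf/common.sh trace above (lines 170-211 of that script) reduces to the veth/bridge topology sketched below. This is a condensed, hand-written summary using the interface names, namespace name, and addresses exactly as they appear in the trace; it is not a literal excerpt of the script, and error handling and the "true" fallbacks are omitted.

  # build the target namespace and the three veth pairs (one initiator side, two target paths)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator interface
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addresses as set in the trace: NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic in and hairpin forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity checks, matching the three pings in the log
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1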
00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=125906 00:36:47.342 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:47.599 [2024-07-15 13:20:59.855491] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.599 13:20:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:47.857 Malloc0 00:36:47.857 13:21:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:48.115 13:21:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:48.372 13:21:00 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:48.631 [2024-07-15 13:21:01.011492] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.631 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:48.890 [2024-07-15 13:21:01.243498] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:48.890 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=125990 00:36:48.890 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 125990 /var/tmp/bdevperf.sock 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 125990 ']' 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:48.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
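The target-side provisioning traced in the block above reduces to the rpc.py sequence sketched below. This is a condensed summary, assuming the nvmf_tgt started earlier is listening on its default /var/tmp/spdk.sock RPC socket; every command and argument is taken from the trace, not invented.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with an 8192-entry shared buffer pool
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB, 512-byte-block RAM bdev used as the namespace backing store
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # ANA-reporting subsystem with two max namespaces, open to any host
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same target IP; ports 4420 and 4421 are the two multipath paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The rest of the run then attaches bdevperf to both listeners and flips their ANA states (optimized / non_optimized / inaccessible) while the bpftrace probe on nvmf_path.bt confirms which port is actually carrying I/O.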
00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:48.891 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:49.149 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:49.149 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:36:49.149 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:49.409 13:21:01 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:36:49.975 Nvme0n1 00:36:49.975 13:21:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:50.233 Nvme0n1 00:36:50.233 13:21:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:36:50.233 13:21:02 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:51.167 13:21:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:36:51.167 13:21:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:51.424 13:21:03 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:51.680 13:21:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:36:51.680 13:21:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126064 00:36:51.680 13:21:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:36:51.680 13:21:04 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # 
active_port=4421 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:58.335 Attaching 4 probes... 00:36:58.335 @path[10.0.0.2, 4421]: 17004 00:36:58.335 @path[10.0.0.2, 4421]: 17672 00:36:58.335 @path[10.0.0.2, 4421]: 17869 00:36:58.335 @path[10.0.0.2, 4421]: 17744 00:36:58.335 @path[10.0.0.2, 4421]: 17370 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126064 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:58.335 13:21:10 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:58.594 13:21:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:36:58.594 13:21:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126195 00:36:58.594 13:21:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:36:58.594 13:21:11 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:05.144 Attaching 4 probes... 
00:37:05.144 @path[10.0.0.2, 4420]: 16912 00:37:05.144 @path[10.0.0.2, 4420]: 17106 00:37:05.144 @path[10.0.0.2, 4420]: 16029 00:37:05.144 @path[10.0.0.2, 4420]: 16180 00:37:05.144 @path[10.0.0.2, 4420]: 16663 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126195 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:37:05.144 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:05.401 13:21:17 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:05.657 13:21:18 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:37:05.657 13:21:18 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126325 00:37:05.657 13:21:18 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:05.657 13:21:18 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:12.209 Attaching 4 probes... 
00:37:12.209 @path[10.0.0.2, 4421]: 16635 00:37:12.209 @path[10.0.0.2, 4421]: 16302 00:37:12.209 @path[10.0.0.2, 4421]: 17536 00:37:12.209 @path[10.0.0.2, 4421]: 17139 00:37:12.209 @path[10.0.0.2, 4421]: 17575 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126325 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:12.209 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:12.467 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:37:12.467 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126446 00:37:12.467 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:12.467 13:21:24 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:19.024 13:21:30 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:19.024 13:21:30 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:19.024 Attaching 4 probes... 
00:37:19.024 00:37:19.024 00:37:19.024 00:37:19.024 00:37:19.024 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126446 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:19.024 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:19.589 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:37:19.589 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126576 00:37:19.589 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:19.589 13:21:31 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:26.149 13:21:37 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:26.149 13:21:37 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:26.149 Attaching 4 probes... 
00:37:26.149 @path[10.0.0.2, 4421]: 16970 00:37:26.149 @path[10.0.0.2, 4421]: 17235 00:37:26.149 @path[10.0.0.2, 4421]: 17054 00:37:26.149 @path[10.0.0.2, 4421]: 17274 00:37:26.149 @path[10.0.0.2, 4421]: 16691 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126576 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:26.149 [2024-07-15 13:21:38.338109] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338170] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338182] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338190] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338199] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338207] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338215] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338223] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338232] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338240] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338249] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338263] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338271] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 
13:21:38.338279] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338287] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338295] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338303] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338311] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338319] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338327] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 [2024-07-15 13:21:38.338335] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1741c50 is same with the state(5) to be set 00:37:26.149 13:21:38 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:37:27.083 13:21:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:37:27.083 13:21:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:27.083 13:21:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126707 00:37:27.083 13:21:39 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:33.648 Attaching 4 probes... 
00:37:33.648 @path[10.0.0.2, 4420]: 16414 00:37:33.648 @path[10.0.0.2, 4420]: 16788 00:37:33.648 @path[10.0.0.2, 4420]: 16724 00:37:33.648 @path[10.0.0.2, 4420]: 16965 00:37:33.648 @path[10.0.0.2, 4420]: 17036 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126707 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:33.648 13:21:45 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:33.648 [2024-07-15 13:21:46.037213] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:33.648 13:21:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:33.906 13:21:46 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:37:40.485 13:21:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:37:40.485 13:21:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=126885 00:37:40.485 13:21:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:40.485 13:21:52 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 125906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:47.079 Attaching 4 probes... 
00:37:47.079 @path[10.0.0.2, 4421]: 16902 00:37:47.079 @path[10.0.0.2, 4421]: 17251 00:37:47.079 @path[10.0.0.2, 4421]: 17285 00:37:47.079 @path[10.0.0.2, 4421]: 17149 00:37:47.079 @path[10.0.0.2, 4421]: 17147 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 126885 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 125990 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 125990 ']' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 125990 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125990 00:37:47.079 killing process with pid 125990 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125990' 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 125990 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 125990 00:37:47.079 Connection closed with partial response: 00:37:47.079 00:37:47.079 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 125990 00:37:47.079 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:37:47.079 [2024-07-15 13:21:01.309122] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:37:47.079 [2024-07-15 13:21:01.309234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125990 ] 00:37:47.079 [2024-07-15 13:21:01.443649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.079 [2024-07-15 13:21:01.512465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.079 Running I/O for 90 seconds... 00:37:47.079 [2024-07-15 13:21:10.996918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.996985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.997833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.997850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:47.079 [2024-07-15 13:21:10.998247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.079 [2024-07-15 13:21:10.998264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:10.998286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:10.998302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:10.998340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:10.998359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:37:47.080 [2024-07-15 13:21:11.000254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.000347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.000974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.000996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.080 [2024-07-15 13:21:11.001012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.001945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.001972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.002000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.002024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.002041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.002063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.002080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:47.080 [2024-07-15 13:21:11.002101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.080 [2024-07-15 13:21:11.002119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:89 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.002976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.002994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.081 [2024-07-15 13:21:11.003417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:47.081 [2024-07-15 13:21:11.003777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.081 [2024-07-15 13:21:11.003796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.003818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.003835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.003856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 
13:21:11.003872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.003894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.003910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.003958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.003981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.003997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50240 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:11.004668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.082 [2024-07-15 13:21:11.004684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.596963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:37:47.082 [2024-07-15 13:21:17.597655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.597981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.597998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.598020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.598036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.598071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.082 [2024-07-15 13:21:17.598092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:47.082 [2024-07-15 13:21:17.598115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598501] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 
13:21:17.598923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.598962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.598985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:47.083 [2024-07-15 13:21:17.599618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.083 [2024-07-15 13:21:17.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.599660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.599693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.599720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.599737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.599784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.599804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.599831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.599848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.599963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.599989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.600038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.600082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.600127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.084 [2024-07-15 13:21:17.600170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600482] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.600961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.600988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601371] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.084 [2024-07-15 13:21:17.601640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:47.084 [2024-07-15 13:21:17.601667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 
13:21:17.601832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.601962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.601988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.085 [2024-07-15 13:21:17.602680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.602723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.602751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.602796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:37:47.085 [2024-07-15 13:21:17.603501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:17.603549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:17.603642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.865810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.865875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.865934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.865958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.865982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.865999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.866021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.866037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.866059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.866076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.866098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.866114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.866135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.085 [2024-07-15 13:21:24.866152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:47.085 [2024-07-15 13:21:24.866173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.866972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.866994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.867780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.868009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.868041] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.868059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.868098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.086 [2024-07-15 13:21:24.868117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:47.086 [2024-07-15 13:21:24.868143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 
sqhd:0040 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.868980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.868996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 
13:21:24.869342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.087 [2024-07-15 13:21:24.869600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.087 [2024-07-15 13:21:24.869642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:47.087 [2024-07-15 13:21:24.869667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.087 [2024-07-15 13:21:24.869684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.869735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126536 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.869835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.869877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.869919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.869961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.869987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 
p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.088 [2024-07-15 13:21:24.870812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.870956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.088 [2024-07-15 13:21:24.871544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:47.088 [2024-07-15 13:21:24.871862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:24.871892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:24.871934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:24.871954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:47.089 [2024-07-15 13:21:38.340426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.340978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.340993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40720 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 
[2024-07-15 13:21:38.341325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.089 [2024-07-15 13:21:38.341536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.089 [2024-07-15 13:21:38.341697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.089 [2024-07-15 13:21:38.341711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.341974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.341989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:47.090 [2024-07-15 13:21:38.342257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.090 [2024-07-15 13:21:38.342785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.342980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.342995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.343008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.090 [2024-07-15 13:21:38.343024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.090 [2024-07-15 13:21:38.343037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:20 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41432 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.091 [2024-07-15 13:21:38.343627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.343702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41480 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.343717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.343748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.343758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41488 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.343794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.343836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.343856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41496 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.343879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.343922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.343942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41504 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.343966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.343991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41512 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.344041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41520 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.344089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41528 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.344135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41536 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.344181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41544 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 
13:21:38.344227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.091 [2024-07-15 13:21:38.344236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.091 [2024-07-15 13:21:38.344256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41552 len:8 PRP1 0x0 PRP2 0x0 00:37:47.091 [2024-07-15 13:21:38.344271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.091 [2024-07-15 13:21:38.344284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41560 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41568 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41576 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41584 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41592 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344519] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41600 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41608 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41616 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41624 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41632 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41640 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:37:47.092 [2024-07-15 13:21:38.344842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41648 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41656 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.344930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:47.092 [2024-07-15 13:21:38.344939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:47.092 [2024-07-15 13:21:38.344950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41664 len:8 PRP1 0x0 PRP2 0x0 00:37:47.092 [2024-07-15 13:21:38.344962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.345022] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2243500 was disconnected and freed. reset controller. 
00:37:47.092 [2024-07-15 13:21:38.345173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:47.092 [2024-07-15 13:21:38.345199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.345228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:47.092 [2024-07-15 13:21:38.345242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.345256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:47.092 [2024-07-15 13:21:38.345269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.345283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:47.092 [2024-07-15 13:21:38.345296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.345311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.092 [2024-07-15 13:21:38.345325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:47.092 [2024-07-15 13:21:38.360843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240f4d0 is same with the state(5) to be set 00:37:47.092 [2024-07-15 13:21:38.362677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:47.092 [2024-07-15 13:21:38.362735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240f4d0 (9): Bad file descriptor 00:37:47.092 [2024-07-15 13:21:38.362928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:47.092 [2024-07-15 13:21:38.362965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240f4d0 with addr=10.0.0.2, port=4421 00:37:47.092 [2024-07-15 13:21:38.362986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240f4d0 is same with the state(5) to be set 00:37:47.092 [2024-07-15 13:21:38.363013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240f4d0 (9): Bad file descriptor 00:37:47.092 [2024-07-15 13:21:38.363038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:47.092 [2024-07-15 13:21:38.363055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:47.092 [2024-07-15 13:21:38.363070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:47.092 [2024-07-15 13:21:38.363101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:47.092 [2024-07-15 13:21:38.363117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:47.092 [2024-07-15 13:21:48.466730] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:37:47.092 Received shutdown signal, test time was about 55.927814 seconds
00:37:47.092
00:37:47.092 Latency(us)
00:37:47.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:47.092 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:37:47.092 Verification LBA range: start 0x0 length 0x4000
00:37:47.092 Nvme0n1 : 55.93 7250.60 28.32 0.00 0.00 17620.54 426.36 7046430.72
00:37:47.092 ===================================================================================================================
00:37:47.092 Total : 7250.60 28.32 0.00 0.00 17620.54 426.36 7046430.72
00:37:47.092 13:21:58 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@492 -- # nvmfcleanup
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:47.092 rmmod nvme_tcp
00:37:47.092 rmmod nvme_fabrics
00:37:47.092 rmmod nvme_keyring
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@493 -- # '[' -n 125906 ']'
00:37:47.092 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@494 -- # killprocess 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 125906 ']'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:37:47.093 killing process with pid 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125906'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 125906
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if
00:37:47.093
00:37:47.093 real 1m0.615s
00:37:47.093 user 2m38.939s
00:37:47.093 sys 0m24.387s
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:37:47.093 ************************************
00:37:47.093 END TEST nvmf_host_multipath
00:37:47.093 ************************************
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@122 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1105 -- # xtrace_disable
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:47.093 ************************************
00:37:47.093 START TEST nvmf_timeout
00:37:47.093 ************************************
00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:37:47.093 * Looking for test storage...
00:37:47.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:37:47.093 13:21:59 
nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@452 -- # prepare_net_devs 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@414 -- # local -g is_hw=no 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@416 -- # remove_spdk_ns 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@436 -- # nvmf_veth_init 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:47.093 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:37:47.352 Cannot find device "nvmf_tgt_br" 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@159 -- # 
true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:37:47.352 Cannot find device "nvmf_tgt_br2" 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@160 -- # true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:37:47.352 Cannot find device "nvmf_tgt_br" 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:37:47.352 Cannot find device "nvmf_tgt_br2" 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:47.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:47.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:47.352 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- 
nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:37:47.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:47.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:37:47.353 00:37:47.353 --- 10.0.0.2 ping statistics --- 00:37:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.353 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:37:47.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:47.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:37:47.353 00:37:47.353 --- 10.0.0.3 ping statistics --- 00:37:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.353 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:37:47.353 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:47.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:47.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:37:47.611 00:37:47.611 --- 10.0.0.1 ping statistics --- 00:37:47.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.611 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@437 -- # return 0 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@485 -- # nvmfpid=127200 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@486 -- # waitforlisten 127200 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 127200 ']' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:47.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:47.611 13:21:59 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:47.611 [2024-07-15 13:21:59.899469] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:47.611 [2024-07-15 13:21:59.900634] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:37:47.611 [2024-07-15 13:21:59.900719] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.611 [2024-07-15 13:22:00.035834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:47.869 [2024-07-15 13:22:00.094844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.869 [2024-07-15 13:22:00.094909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.869 [2024-07-15 13:22:00.094922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.869 [2024-07-15 13:22:00.094930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.869 [2024-07-15 13:22:00.094937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:47.869 [2024-07-15 13:22:00.095115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.869 [2024-07-15 13:22:00.095124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.869 [2024-07-15 13:22:00.142339] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:47.869 [2024-07-15 13:22:00.142931] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:47.869 [2024-07-15 13:22:00.142958] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:48.435 13:22:00 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:48.701 [2024-07-15 13:22:01.107940] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.701 13:22:01 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:48.959 Malloc0 00:37:49.217 13:22:01 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:49.474 13:22:01 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
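For readability, the target-side bring-up that the trace above walks through can be condensed into a short sketch. This only restates commands already visible in the log (interface names, IP addresses, the interrupt-mode flags, and the RPC arguments are taken from the trace, not invented); path prefixes are abbreviated and the second veth pair plus the per-link "up" steps are elided, so treat it as a sketch of common.sh/timeout.sh rather than a complete reproduction:
# Test-bed network: a veth pair bridged to a network namespace that hosts the target
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# (each link is also brought up, as the trace shows)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# Target runs inside the namespace, interrupt mode, cores 0-1 (-m 0x3)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0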
00:37:49.746 13:22:01 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:50.024 [2024-07-15 13:22:02.267993] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=127291 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 127291 /var/tmp/bdevperf.sock 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 127291 ']' 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:50.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:50.024 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:50.024 [2024-07-15 13:22:02.344937] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:37:50.024 [2024-07-15 13:22:02.345042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127291 ] 00:37:50.024 [2024-07-15 13:22:02.483157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.282 [2024-07-15 13:22:02.545142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:50.282 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:50.282 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:37:50.282 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:50.541 13:22:02 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:37:51.106 NVMe0n1 00:37:51.106 13:22:03 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=127324 00:37:51.106 13:22:03 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:51.106 13:22:03 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:37:51.106 Running I/O for 10 seconds... 
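The host side of the timeout test is similarly compact: a listener is added for the subsystem, bdevperf is started against its own RPC socket, and the controller is attached with a short controller-loss timeout and reconnect delay so the listener removal that the trace below performs can exercise the reconnect/timeout paths. Again, this is only a condensed restatement of the commands already traced above (paths abbreviated):
# Expose the subsystem on the namespaced address, then drive I/O from bdevperf (core mask 0x4)
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &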
00:37:52.041 13:22:04 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:52.303 [2024-07-15 13:22:04.693236] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693291] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693302] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693311] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693320] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693328] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693336] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693344] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693352] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693360] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693368] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693376] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693384] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693392] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693401] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693409] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693417] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693425] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693433] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693442] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693449] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693457] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693465] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693473] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693481] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693489] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693497] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693505] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693512] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693520] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693528] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693536] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693545] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693554] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693561] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693569] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693577] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693585] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693593] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693601] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693609] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693617] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the 
state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693625] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693632] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693640] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693648] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693656] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693664] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693672] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693680] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693688] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693696] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693704] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693712] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693720] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693727] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693735] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693743] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693751] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693760] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693783] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693792] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693800] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693808] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693816] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693824] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693832] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693840] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693848] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693856] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693864] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693872] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693880] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693888] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693896] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693904] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.303 [2024-07-15 13:22:04.693912] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693920] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693929] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693938] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693946] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693953] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693961] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693970] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693977] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 
13:22:04.693985] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.693994] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.694002] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.694010] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.694018] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.694026] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15443e0 is same with the state(5) to be set 00:37:52.304 [2024-07-15 13:22:04.696508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79504 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.696746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 
[2024-07-15 13:22:04.696928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.696980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.304 [2024-07-15 13:22:04.696989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.304 [2024-07-15 13:22:04.697258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.304 [2024-07-15 13:22:04.697269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.305 [2024-07-15 13:22:04.697279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:52.305 [2024-07-15 13:22:04.697299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 
13:22:04.697771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.697984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.305 [2024-07-15 13:22:04.697994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698022] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.305 [2024-07-15 13:22:04.698033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:37:52.305 [2024-07-15 13:22:04.698042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.305 [2024-07-15 13:22:04.698063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.305 [2024-07-15 13:22:04.698071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:37:52.305 [2024-07-15 13:22:04.698080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.305 [2024-07-15 13:22:04.698098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.305 [2024-07-15 13:22:04.698106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:37:52.305 [2024-07-15 13:22:04.698114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.305 [2024-07-15 13:22:04.698131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.305 [2024-07-15 13:22:04.698139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:37:52.305 [2024-07-15 13:22:04.698148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.305 [2024-07-15 13:22:04.698164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.305 [2024-07-15 13:22:04.698172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:37:52.305 [2024-07-15 13:22:04.698182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.305 [2024-07-15 13:22:04.698191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:37:52.306 [2024-07-15 13:22:04.698242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 
0x0 00:37:52.306 [2024-07-15 13:22:04.698874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.698975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.698984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.698996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.306 [2024-07-15 13:22:04.699004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:37:52.306 [2024-07-15 13:22:04.699013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.306 [2024-07-15 13:22:04.699023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.306 [2024-07-15 13:22:04.699030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.699225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.699232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.699240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.699249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 13:22:04 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:37:52.307 [2024-07-15 13:22:04.720514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80088 len:8 PRP1 0x0 PRP2 0x0 
00:37:52.307 [2024-07-15 13:22:04.720599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80096 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80104 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80120 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80128 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80136 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.720954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.720966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.720978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80144 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.720992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.721010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.721021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.721034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.721048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.721062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.721073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.721086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80160 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.721099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.721114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.721125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.721150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80168 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.721166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.721181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.307 [2024-07-15 13:22:04.721192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.307 [2024-07-15 13:22:04.721204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80176 len:8 PRP1 0x0 PRP2 0x0 00:37:52.307 [2024-07-15 13:22:04.721218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.307 [2024-07-15 13:22:04.721234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80184 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80192 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80200 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:52.308 [2024-07-15 13:22:04.721602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:52.308 [2024-07-15 13:22:04.721837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:52.308 [2024-07-15 13:22:04.721851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:37:52.308 [2024-07-15 13:22:04.721864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.721938] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd078d0 was disconnected and freed. reset controller. 
00:37:52.308 [2024-07-15 13:22:04.722128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:52.308 [2024-07-15 13:22:04.722162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.722182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:52.308 [2024-07-15 13:22:04.722196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.722211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:52.308 [2024-07-15 13:22:04.722225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.722241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:52.308 [2024-07-15 13:22:04.722255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.308 [2024-07-15 13:22:04.722270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a240 is same with the state(5) to be set 00:37:52.308 [2024-07-15 13:22:04.722632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.308 [2024-07-15 13:22:04.722664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a240 (9): Bad file descriptor 00:37:52.308 [2024-07-15 13:22:04.722820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.308 [2024-07-15 13:22:04.722854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9a240 with addr=10.0.0.2, port=4420 00:37:52.308 [2024-07-15 13:22:04.722879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a240 is same with the state(5) to be set 00:37:52.308 [2024-07-15 13:22:04.722920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a240 (9): Bad file descriptor 00:37:52.308 [2024-07-15 13:22:04.722951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.308 [2024-07-15 13:22:04.722966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.308 [2024-07-15 13:22:04.722982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.308 [2024-07-15 13:22:04.723012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
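With the qpair gone the host enters its reconnect loop: posix_sock_create fails with errno = 111, typically because nothing is accepting connections on 10.0.0.2:4420 any more, the half-initialized controller is failed, and bdev_nvme schedules the next reset; the timestamps above and below show the attempts arriving roughly every two seconds until the controller is finally left in the failed state. A one-line check of what errno 111 means on Linux (illustrative only, not part of the captured test):

    # errno 111 from posix_sock_create is ECONNREFUSED ("Connection refused")
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'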
00:37:52.308 [2024-07-15 13:22:04.723028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:54.867 [2024-07-15 13:22:06.723200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.867 [2024-07-15 13:22:06.723254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9a240 with addr=10.0.0.2, port=4420 00:37:54.867 [2024-07-15 13:22:06.723271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a240 is same with the state(5) to be set 00:37:54.867 [2024-07-15 13:22:06.723298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a240 (9): Bad file descriptor 00:37:54.867 [2024-07-15 13:22:06.723317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:54.867 [2024-07-15 13:22:06.723327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:54.867 [2024-07-15 13:22:06.723337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:54.867 [2024-07-15 13:22:06.723364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.867 [2024-07-15 13:22:06.723376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:37:54.867 13:22:06 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:37:55.136 13:22:07 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:37:55.136 13:22:07 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:37:56.543 [2024-07-15 13:22:08.723656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:56.543 [2024-07-15 13:22:08.723716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc9a240 with addr=10.0.0.2, port=4420 00:37:56.543 [2024-07-15 13:22:08.723734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a240 is same with the state(5) to be set 00:37:56.543 [2024-07-15 13:22:08.723761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9a240 (9): Bad file descriptor 00:37:56.543 [2024-07-15 13:22:08.723794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:56.543 [2024-07-15 13:22:08.723805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:56.543 [2024-07-15 13:22:08.723816] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:56.543 [2024-07-15 13:22:08.723842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:56.543 [2024-07-15 13:22:08.723854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:58.439 [2024-07-15 13:22:10.724018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:58.439 [2024-07-15 13:22:10.724074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:58.439 [2024-07-15 13:22:10.724087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:58.439 [2024-07-15 13:22:10.724097] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:37:58.439 [2024-07-15 13:22:10.724126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:59.372
00:37:59.372 Latency(us)
00:37:59.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:59.372 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:37:59.372 Verification LBA range: start 0x0 length 0x4000
00:37:59.372 NVMe0n1 : 8.26 1198.70 4.68 15.50 0.00 105376.35 2383.13 7046430.72
00:37:59.372 ===================================================================================================================
00:37:59.372 Total : 1198.70 4.68 15.50 0.00 105376.35 2383.13 7046430.72
00:37:59.372 0
00:37:59.937 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:37:59.937 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:59.937 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:38:00.194 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:38:00.194 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:38:00.194 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:38:00.194 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@65 -- # wait 127324
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 127291
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 127291 ']'
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 127291
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127291
00:38:00.760 killing process with pid 127291
00:38:00.760 Received shutdown signal, test time was about 9.518225 seconds
00:38:00.760
00:38:00.760 Latency(us)
00:38:00.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:00.760 ===================================================================================================================
00:38:00.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127291'
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 127291
00:38:00.760 13:22:12 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 127291
00:38:00.760 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:01.021 [2024-07-15 13:22:13.375980] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:01.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=127473
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 127473 /var/tmp/bdevperf.sock
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 127473 ']'
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:38:01.021 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:38:01.021 [2024-07-15 13:22:13.464423] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization...
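Here the harness appears to move on to the next timeout case: the previous bdevperf (pid 127291) is killed, the TCP listener is re-added on the target, and a fresh bdevperf (pid 127473) is launched in wait-for-RPC mode (-z) on its own socket; the workload itself is only started later via bdevperf.py perform_tests. A stand-alone sketch of that restart sequence, reusing the paths from this run and using rpc_get_methods as a stand-in for autotest's waitforlisten helper (an assumption, not the harness's exact code):

    # Make sure the target is listening again on 10.0.0.2:4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Launch bdevperf idle (-z) against its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    # Poll the bdevperf RPC socket until it answers before sending any configuration RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done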
00:38:01.021 [2024-07-15 13:22:13.464553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127473 ] 00:38:01.278 [2024-07-15 13:22:13.606930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.278 [2024-07-15 13:22:13.665409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:01.278 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:01.278 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:38:01.278 13:22:13 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:01.843 13:22:14 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:38:02.100 NVMe0n1 00:38:02.100 13:22:14 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=127507 00:38:02.100 13:22:14 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:02.100 13:22:14 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:38:02.357 Running I/O for 10 seconds... 00:38:03.295 13:22:15 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.295 [2024-07-15 13:22:15.715939] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.715999] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716019] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716033] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716046] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716059] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716072] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716086] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716099] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716111] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) 
to be set 00:38:03.295 [2024-07-15 13:22:15.716124] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716136] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716148] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716161] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716174] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716187] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716200] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716214] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716228] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716241] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716255] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716268] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716281] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716293] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716307] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716321] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716335] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716349] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716362] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716375] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716388] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716400] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716414] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716428] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716443] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716458] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716471] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716485] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716499] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716511] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716525] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716539] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716551] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716565] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716579] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716591] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716604] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716617] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716630] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716644] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716658] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716672] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716686] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716700] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716724] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716738] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716751] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716784] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716802] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716816] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716829] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716843] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716858] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716872] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716886] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716899] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716912] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716925] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716938] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.295 [2024-07-15 13:22:15.716951] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.716965] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.716978] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.716992] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717007] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717022] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the 
state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717035] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717048] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717061] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717073] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717087] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717102] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717116] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717129] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717141] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717154] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717167] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717180] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717194] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717207] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717220] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717233] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717246] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717258] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717272] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717285] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717298] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717313] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717327] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717341] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717355] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717367] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717380] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717394] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717407] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717422] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717436] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717450] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717464] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717479] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717494] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717510] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717523] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717536] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717549] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717563] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717577] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717590] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717604] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 
13:22:15.717618] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717632] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717646] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717661] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717675] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717689] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717702] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717714] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f3fa0 is same with the state(5) to be set 00:38:03.296 [2024-07-15 13:22:15.717985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.296 [2024-07-15 13:22:15.718359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.296 [2024-07-15 13:22:15.718369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 
[2024-07-15 13:22:15.718391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.718985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.718996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.719040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.719062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.297 [2024-07-15 13:22:15.719084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 
[2024-07-15 13:22:15.719290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.297 [2024-07-15 13:22:15.719313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.297 [2024-07-15 13:22:15.719322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.719975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.719991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.720017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82048 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.720040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.720078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:03.298 [2024-07-15 13:22:15.720112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 
[2024-07-15 13:22:15.720371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.298 [2024-07-15 13:22:15.720436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.298 [2024-07-15 13:22:15.720455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.720976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.720993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.299 [2024-07-15 13:22:15.721380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c68d0 is same with the state(5) to be set 00:38:03.299 [2024-07-15 13:22:15.721416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:03.299 [2024-07-15 
13:22:15.721428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:03.299 [2024-07-15 13:22:15.721442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81728 len:8 PRP1 0x0 PRP2 0x0 00:38:03.299 [2024-07-15 13:22:15.721458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:03.299 [2024-07-15 13:22:15.721517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c68d0 was disconnected and freed. reset controller. 00:38:03.299 [2024-07-15 13:22:15.721830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.299 [2024-07-15 13:22:15.721950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:03.299 [2024-07-15 13:22:15.722115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.299 [2024-07-15 13:22:15.722145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:03.299 [2024-07-15 13:22:15.722161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:03.299 [2024-07-15 13:22:15.722187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:03.299 [2024-07-15 13:22:15.722209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.299 [2024-07-15 13:22:15.722225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.299 [2024-07-15 13:22:15.722241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.299 [2024-07-15 13:22:15.722269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.299 [2024-07-15 13:22:15.722285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.299 13:22:15 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:38:04.674 [2024-07-15 13:22:16.722450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:04.674 [2024-07-15 13:22:16.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:04.674 [2024-07-15 13:22:16.722540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:04.674 [2024-07-15 13:22:16.722568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:04.674 [2024-07-15 13:22:16.722600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:04.674 [2024-07-15 13:22:16.722613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:04.674 [2024-07-15 13:22:16.722624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:04.674 [2024-07-15 13:22:16.722652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:04.674 [2024-07-15 13:22:16.722664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:04.674 13:22:16 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:04.674 [2024-07-15 13:22:17.027975] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:04.674 13:22:17 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@92 -- # wait 127507
00:38:05.608 [2024-07-15 13:22:17.725951] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:38:12.160
00:38:12.160 Latency(us)
00:38:12.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:12.160 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:12.160 Verification LBA range: start 0x0 length 0x4000
00:38:12.160 NVMe0n1 : 10.01 6213.04 24.27 0.00 0.00 20556.54 2219.29 3019898.88
00:38:12.160 ===================================================================================================================
00:38:12.160 Total : 6213.04 24.27 0.00 0.00 20556.54 2219.29 3019898.88
00:38:12.160 0
00:38:12.160 13:22:24 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=127619
00:38:12.160 13:22:24 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:12.160 13:22:24 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:38:12.418 Running I/O for 10 seconds...
00:38:13.370 13:22:25 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:13.630 [2024-07-15 13:22:25.893141] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893204] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893221] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893231] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893241] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893251] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893261] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893271] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893281] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set
00:38:13.630 [2024-07-15 13:22:25.893291] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with
the state(5) to be set 00:38:13.630 [2024-07-15 13:22:25.893301] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893311] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893320] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893330] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893340] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893349] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893359] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893369] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893378] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893388] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893398] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893407] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893418] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893428] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893438] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4310 is same with the state(5) to be set 00:38:13.631 [2024-07-15 13:22:25.893810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.893854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.893878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.893903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 
[2024-07-15 13:22:25.893925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.893945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.893967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.893997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.631 [2024-07-15 13:22:25.894451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80064 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.631 [2024-07-15 13:22:25.894598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.631 [2024-07-15 13:22:25.894609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 
13:22:25.894800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.894981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.894990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.632 [2024-07-15 13:22:25.895490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.632 [2024-07-15 13:22:25.895501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 
13:22:25.895650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.895984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.895995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:87 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.633 [2024-07-15 13:22:25.896172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80664 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80672 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80680 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80688 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896345] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80696 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80704 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.633 [2024-07-15 13:22:25.896413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.633 [2024-07-15 13:22:25.896421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.633 [2024-07-15 13:22:25.896428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80712 len:8 PRP1 0x0 PRP2 0x0 00:38:13.633 [2024-07-15 13:22:25.896437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80720 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80728 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80736 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80744 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80752 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 
13:22:25.896761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.896811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.896819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.896828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.896837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.906647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.906722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.906749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.906809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:13.634 [2024-07-15 13:22:25.906830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:13.634 [2024-07-15 13:22:25.906849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:38:13.634 [2024-07-15 13:22:25.906869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.906996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c8670 was disconnected and freed. reset controller. 
00:38:13.634 [2024-07-15 13:22:25.907126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:13.634 [2024-07-15 13:22:25.907154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.907168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:13.634 [2024-07-15 13:22:25.907177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.907188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:13.634 [2024-07-15 13:22:25.907197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.907207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:13.634 [2024-07-15 13:22:25.907215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.634 [2024-07-15 13:22:25.907225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:13.634 [2024-07-15 13:22:25.907459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.634 [2024-07-15 13:22:25.907491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:13.634 [2024-07-15 13:22:25.907595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.634 [2024-07-15 13:22:25.907617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:13.634 [2024-07-15 13:22:25.907628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:13.634 [2024-07-15 13:22:25.907647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:13.634 [2024-07-15 13:22:25.907662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:13.634 [2024-07-15 13:22:25.907672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:13.634 [2024-07-15 13:22:25.907683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:13.634 [2024-07-15 13:22:25.907720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:13.634 [2024-07-15 13:22:25.907734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:13.634 13:22:25 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:38:14.567 [2024-07-15 13:22:26.907896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.567 [2024-07-15 13:22:26.907974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:14.567 [2024-07-15 13:22:26.907993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:14.567 [2024-07-15 13:22:26.908023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:14.567 [2024-07-15 13:22:26.908043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:14.567 [2024-07-15 13:22:26.908055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:14.567 [2024-07-15 13:22:26.908065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:14.567 [2024-07-15 13:22:26.908094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:14.567 [2024-07-15 13:22:26.908117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.501 [2024-07-15 13:22:27.908254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.501 [2024-07-15 13:22:27.908324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:15.501 [2024-07-15 13:22:27.908342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:15.501 [2024-07-15 13:22:27.908374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:15.501 [2024-07-15 13:22:27.908394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.501 [2024-07-15 13:22:27.908405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.501 [2024-07-15 13:22:27.908415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.501 [2024-07-15 13:22:27.908443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
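The repeated 'posix_sock_create: *ERROR*: connect() failed, errno = 111' records above are the initiator being refused while the target listener on 10.0.0.2:4420 is still down; it is only re-added at host/timeout.sh@102 below. On Linux, errno 111 is ECONNREFUSED, which can be confirmed locally with a one-liner (assumes python3 is available; this is not part of the test itself):
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
    # expected output on Linux: 111 Connection refused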
00:38:15.501 [2024-07-15 13:22:27.908455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.875 [2024-07-15 13:22:28.912202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.875 [2024-07-15 13:22:28.912285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1959240 with addr=10.0.0.2, port=4420 00:38:16.875 [2024-07-15 13:22:28.912303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959240 is same with the state(5) to be set 00:38:16.875 [2024-07-15 13:22:28.912565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1959240 (9): Bad file descriptor 00:38:16.875 [2024-07-15 13:22:28.912843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.875 [2024-07-15 13:22:28.912868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.875 [2024-07-15 13:22:28.912880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.875 [2024-07-15 13:22:28.916839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:16.875 [2024-07-15 13:22:28.916879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.875 13:22:28 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.875 [2024-07-15 13:22:29.191946] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.875 13:22:29 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@103 -- # wait 127619 00:38:17.820 [2024-07-15 13:22:29.947981] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
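The controller reset only succeeds once host/timeout.sh@102 re-adds the TCP listener through the target's rpc.py, as traced above. Stripped of the xtrace noise, the listener toggle driving this scenario is roughly the pair below (script path, NQN, address, and port copied from the commands in this log; the matching remove appears again at host/timeout.sh@126 further down):
    # take the listener away so host-side reconnects start failing with ECONNREFUSED
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ... let the initiator retry for a few seconds ...
    # put the listener back so the next reset/reconnect attempt can complete
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420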
00:38:23.081 00:38:23.081 Latency(us) 00:38:23.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.081 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:23.081 Verification LBA range: start 0x0 length 0x4000 00:38:23.081 NVMe0n1 : 10.01 5290.56 20.67 3510.99 0.00 14501.96 640.47 3035150.89 00:38:23.081 =================================================================================================================== 00:38:23.081 Total : 5290.56 20.67 3510.99 0.00 14501.96 0.00 3035150.89 00:38:23.081 0 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 127473 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 127473 ']' 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 127473 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127473 00:38:23.081 killing process with pid 127473 00:38:23.081 Received shutdown signal, test time was about 10.000000 seconds 00:38:23.081 00:38:23.081 Latency(us) 00:38:23.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.081 =================================================================================================================== 00:38:23.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127473' 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 127473 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 127473 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=127731 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 127731 /var/tmp/bdevperf.sock 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 127731 ']' 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:23.081 13:22:34 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:23.081 [2024-07-15 13:22:34.994909] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:38:23.081 [2024-07-15 13:22:34.995037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127731 ] 00:38:23.081 [2024-07-15 13:22:35.143314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.081 [2024-07-15 13:22:35.202460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.081 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:23.081 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:38:23.081 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=127750 00:38:23.081 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 127731 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:38:23.081 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:38:23.340 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:38:23.597 NVMe0n1 00:38:23.597 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:23.597 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=127799 00:38:23.597 13:22:35 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:38:23.597 Running I/O for 10 seconds... 
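Condensed from the trace above, the bdevperf setup for this test case is roughly the following sequence (binary paths, RPC socket, and flags copied from host/timeout.sh@109 through @123 in this log; the comments on the reconnect flags are an interpretation, not output of the test):
    # start bdevperf idle (-z) on its own RPC socket; workload: 4 KiB randread, queue depth 128, 10 s
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # host/timeout.sh@118: bdev_nvme options as used by this test
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # host/timeout.sh@120: attach the controller; retry the connection every 2 s and give up
    # (delete the controller) after it has been unreachable for 5 s
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # host/timeout.sh@123: kick off the I/O defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests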
00:38:24.530 13:22:36 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.790 [2024-07-15 13:22:37.197410] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197471] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197483] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197492] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197500] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197509] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197518] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197526] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197535] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197543] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197551] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197559] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197568] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197576] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197584] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197592] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197600] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197608] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197616] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197624] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197632] 
tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197640] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197648] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197656] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197665] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197673] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197681] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197690] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197698] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197706] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197714] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197723] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197731] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197739] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197749] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.197757] tcp.c:1704:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1744760 is same with the state(5) to be set 00:38:24.790 [2024-07-15 13:22:37.198091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.790 [2024-07-15 13:22:37.198142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.790 [2024-07-15 13:22:37.198167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.790 [2024-07-15 13:22:37.198178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.790 [2024-07-15 13:22:37.198190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.790 [2024-07-15 13:22:37.198200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.790 [2024-07-15 13:22:37.198213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.790 [2024-07-15 13:22:37.198222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.790 [2024-07-15 13:22:37.198234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:24.791 [2024-07-15 13:22:37.198715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.198975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.198985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.791 [2024-07-15 13:22:37.199299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.791 [2024-07-15 13:22:37.199310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.199971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.199992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 
13:22:37.200358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.792 [2024-07-15 13:22:37.200459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.792 [2024-07-15 13:22:37.200469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200892] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.200988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.200998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:24.793 [2024-07-15 13:22:37.201395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:24.793 [2024-07-15 13:22:37.201450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:24.793 [2024-07-15 13:22:37.201518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:24.793 [2024-07-15 13:22:37.201535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45912 len:8 PRP1 0x0 PRP2 0x0 00:38:24.793 [2024-07-15 13:22:37.201551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201617] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb28d0 was disconnected and freed. reset controller. 
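The wall of "ABORTED - SQ DELETION (00/08)" completions above is the expected fallout of the timeout test tearing down the I/O qpair: every READ still queued on sqid 1 is completed manually with an abort status before qpair 0x1fb28d0 is freed and the controller reset begins. A minimal shell sketch for tallying those aborts from a saved copy of this console output; the file name build.log is a placeholder, not something the test writes:

  # count the aborted completions and the distinct LBAs of the reads they cancelled
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  grep -o 'lba:[0-9]*' build.log | sort -u | wc -l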
00:38:24.793 [2024-07-15 13:22:37.201737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:24.793 [2024-07-15 13:22:37.201753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.793 [2024-07-15 13:22:37.201778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:24.794 [2024-07-15 13:22:37.201790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.794 [2024-07-15 13:22:37.201806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:24.794 [2024-07-15 13:22:37.201821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.794 [2024-07-15 13:22:37.201841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:24.794 [2024-07-15 13:22:37.201859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:24.794 [2024-07-15 13:22:37.201875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45240 is same with the state(5) to be set 00:38:24.794 [2024-07-15 13:22:37.202173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.794 [2024-07-15 13:22:37.202206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45240 (9): Bad file descriptor 00:38:24.794 [2024-07-15 13:22:37.202332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:24.794 [2024-07-15 13:22:37.202361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f45240 with addr=10.0.0.2, port=4420 00:38:24.794 [2024-07-15 13:22:37.202373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45240 is same with the state(5) to be set 00:38:24.794 [2024-07-15 13:22:37.202394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45240 (9): Bad file descriptor 00:38:24.794 [2024-07-15 13:22:37.202410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:24.794 [2024-07-15 13:22:37.202420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:24.794 [2024-07-15 13:22:37.202430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:24.794 [2024-07-15 13:22:37.202453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
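In the block above the first reconnect attempt is refused outright: posix_sock_create reports connect() errno = 111, which on Linux is ECONNREFUSED, meaning nothing is accepting connections on 10.0.0.2:4420 at that moment, so nvme_ctrlr_process_init declares the controller failed and bdev_nvme schedules the next reset. A quick, hedged way to confirm that errno mapping on the build host (the header path is the usual Linux location, not something taken from this log):

  grep -w 111 /usr/include/asm-generic/errno.h
  # on most Linux systems this prints: #define ECONNREFUSED 111 /* Connection refused */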
00:38:24.794 [2024-07-15 13:22:37.202471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:24.794 13:22:37 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@128 -- # wait 127799 00:38:27.323 [2024-07-15 13:22:39.202740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.323 [2024-07-15 13:22:39.202816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f45240 with addr=10.0.0.2, port=4420 00:38:27.323 [2024-07-15 13:22:39.202833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45240 is same with the state(5) to be set 00:38:27.323 [2024-07-15 13:22:39.202861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45240 (9): Bad file descriptor 00:38:27.323 [2024-07-15 13:22:39.202880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:27.323 [2024-07-15 13:22:39.202891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:27.323 [2024-07-15 13:22:39.202902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:27.323 [2024-07-15 13:22:39.202929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:27.323 [2024-07-15 13:22:39.202941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:29.225 [2024-07-15 13:22:41.203285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:29.225 [2024-07-15 13:22:41.203376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f45240 with addr=10.0.0.2, port=4420 00:38:29.225 [2024-07-15 13:22:41.203404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45240 is same with the state(5) to be set 00:38:29.225 [2024-07-15 13:22:41.203455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45240 (9): Bad file descriptor 00:38:29.226 [2024-07-15 13:22:41.203503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:29.226 [2024-07-15 13:22:41.203522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:29.226 [2024-07-15 13:22:41.203540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:29.226 [2024-07-15 13:22:41.203579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:29.226 [2024-07-15 13:22:41.203600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:31.123 [2024-07-15 13:22:43.203677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
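The connect() attempts above land at 13:22:37, 13:22:39 and 13:22:41, two seconds apart, and the 13:22:43 pass gives up without opening a socket because the controller is already in the failed state; that cadence matches the "reconnect delay bdev controller NVMe0" trace entries printed further down at roughly 3388 ms, 5388 ms and 7389 ms, following the initial reset at about 1387 ms. A rough sketch for pulling those attempt times back out of a saved console log; build.log is again a placeholder file name:

  # each refused connect carries a [date time] prefix; print just the times
  grep -o '\[2024-07-15 [^]]*\] posix.c:1038:posix_sock_create: \*ERROR\*: connect() failed' build.log |
      awk '{sub(/\]/, "", $2); print $2}'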
00:38:31.123 [2024-07-15 13:22:43.203748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:31.123 [2024-07-15 13:22:43.203761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:31.123 [2024-07-15 13:22:43.203781] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:38:31.123 [2024-07-15 13:22:43.203810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:31.744 00:38:31.744 Latency(us) 00:38:31.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.744 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:38:31.744 NVMe0n1 : 8.19 2700.18 10.55 15.62 0.00 47078.60 4051.32 7015926.69 00:38:31.744 =================================================================================================================== 00:38:31.744 Total : 2700.18 10.55 15.62 0.00 47078.60 4051.32 7015926.69 00:38:31.744 0 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:32.002 Attaching 5 probes... 00:38:32.002 1387.239359: reset bdev controller NVMe0 00:38:32.002 1387.328432: reconnect bdev controller NVMe0 00:38:32.002 3387.682269: reconnect delay bdev controller NVMe0 00:38:32.002 3387.708027: reconnect bdev controller NVMe0 00:38:32.002 5388.195684: reconnect delay bdev controller NVMe0 00:38:32.002 5388.225380: reconnect bdev controller NVMe0 00:38:32.002 7388.731172: reconnect delay bdev controller NVMe0 00:38:32.002 7388.756201: reconnect bdev controller NVMe0 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@136 -- # kill 127750 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 127731 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 127731 ']' 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 127731 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127731 00:38:32.002 killing process with pid 127731 00:38:32.002 Received shutdown signal, test time was about 8.246216 seconds 00:38:32.002 00:38:32.002 Latency(us) 00:38:32.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.002 =================================================================================================================== 00:38:32.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127731' 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 127731 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 127731 00:38:32.002 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@492 -- # nvmfcleanup 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:32.260 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:32.260 rmmod nvme_tcp 00:38:32.260 rmmod nvme_fabrics 00:38:32.260 rmmod nvme_keyring 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@493 -- # '[' -n 127200 ']' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@494 -- # killprocess 127200 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 127200 ']' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 127200 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127200 00:38:32.518 killing process with pid 127200 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127200' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 127200 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 127200 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:38:32.518 
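Cleanup above follows the two steps the xtrace shows: the subsystem is deleted over JSON-RPC and nvmftestfini then unloads the kernel initiator modules, which is why rmmod reports nvme_tcp, nvme_fabrics and nvme_keyring here. A minimal sketch of running the same teardown by hand against a live target, using only commands already traced in this log (paths as used in this workspace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp        # the -v output above shows this also unloading nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics    # no-op if the previous step already removed it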
13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@282 -- # remove_spdk_ns 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:38:32.518 00:38:32.518 real 0m45.558s 00:38:32.518 user 2m2.125s 00:38:32.518 sys 0m13.462s 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:32.518 ************************************ 00:38:32.518 END TEST nvmf_timeout 00:38:32.518 ************************************ 00:38:32.518 13:22:44 nvmf_tcp_interrupt_mode.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1142 -- # return 0 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@125 -- # [[ virt == phy ]] 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@130 -- # timing_exit host 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- nvmf/nvmf.sh@132 -- # trap - SIGINT SIGTERM EXIT 00:38:32.776 00:38:32.776 real 15m58.871s 00:38:32.776 user 35m27.349s 00:38:32.776 sys 5m43.151s 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:32.776 13:22:45 nvmf_tcp_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:32.776 ************************************ 00:38:32.776 END TEST nvmf_tcp_interrupt_mode 00:38:32.776 ************************************ 00:38:32.776 13:22:45 -- common/autotest_common.sh@1142 -- # return 0 00:38:32.776 13:22:45 -- spdk/autotest.sh@291 -- # unset TEST_INTERRUPT_MODE 00:38:32.776 13:22:45 -- spdk/autotest.sh@292 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:32.776 13:22:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:32.776 13:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:32.776 13:22:45 -- common/autotest_common.sh@10 -- # set +x 00:38:32.776 ************************************ 00:38:32.776 START TEST spdkcli_nvmf_tcp 00:38:32.776 ************************************ 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:32.776 * Looking for test storage... 
00:38:32.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.776 13:22:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:32.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=128007 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 128007 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 128007 ']' 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.777 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:32.777 13:22:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:33.033 [2024-07-15 13:22:45.264159] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:38:33.033 [2024-07-15 13:22:45.264271] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128007 ] 00:38:33.033 [2024-07-15 13:22:45.401671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:33.033 [2024-07-15 13:22:45.472787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.033 [2024-07-15 13:22:45.472795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:33.964 13:22:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:33.964 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:33.964 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:33.964 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:33.964 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:33.964 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:33.964 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:33.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:33.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:33.964 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:33.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:33.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:33.964 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:33.964 ' 00:38:36.488 [2024-07-15 13:22:48.874813] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.859 [2024-07-15 13:22:50.155848] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:40.387 [2024-07-15 13:22:52.525406] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:42.329 [2024-07-15 13:22:54.614678] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:43.701 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:43.701 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:43.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:43.701 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:43.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:43.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:43.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:43.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:43.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:43.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:43.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:43.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:43.702 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:43.959 13:22:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:44.525 13:22:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:44.525 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:44.525 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:44.525 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:44.525 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:44.525 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:44.525 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:44.525 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:44.525 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:44.525 ' 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:49.788 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:49.788 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:49.788 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete 
Malloc3', 'Malloc3', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:49.788 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:49.788 13:23:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:49.788 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:49.788 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 128007 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 128007 ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 128007 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128007 00:38:50.046 killing process with pid 128007 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128007' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 128007 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 128007 00:38:50.046 Process with pid 128007 is not found 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 128007 ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 128007 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 128007 ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 128007 00:38:50.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (128007) - No such process 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 128007 is not found' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:50.046 ************************************ 00:38:50.046 END TEST spdkcli_nvmf_tcp 00:38:50.046 ************************************ 00:38:50.046 00:38:50.046 real 0m17.382s 00:38:50.046 user 0m37.768s 00:38:50.046 sys 0m0.839s 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:50.046 13:23:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:50.343 13:23:02 -- common/autotest_common.sh@1142 -- # return 0 00:38:50.343 13:23:02 -- spdk/autotest.sh@293 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:50.343 13:23:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:50.343 13:23:02 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:38:50.343 13:23:02 -- common/autotest_common.sh@10 -- # set +x 00:38:50.343 ************************************ 00:38:50.343 START TEST nvmf_identify_passthru 00:38:50.343 ************************************ 00:38:50.343 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:50.343 * Looking for test storage... 00:38:50.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:50.343 13:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:50.343 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:50.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.344 13:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:50.344 13:23:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.344 13:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@452 -- # prepare_net_devs 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # local -g is_hw=no 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # remove_spdk_ns 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.344 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:50.344 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@436 -- # nvmf_veth_init 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@149 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:38:50.344 Cannot find device "nvmf_tgt_br" 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:38:50.344 Cannot find device "nvmf_tgt_br2" 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@160 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:38:50.344 Cannot find device "nvmf_tgt_br" 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:38:50.344 Cannot find device "nvmf_tgt_br2" 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:50.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:50.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:50.344 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:38:50.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:50.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:38:50.619 00:38:50.619 --- 10.0.0.2 ping statistics --- 00:38:50.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.619 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:38:50.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:50.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:38:50.619 00:38:50.619 --- 10.0.0.3 ping statistics --- 00:38:50.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.619 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:50.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:50.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:38:50.619 00:38:50.619 --- 10.0.0.1 ping statistics --- 00:38:50.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.619 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@437 -- # return 0 00:38:50.619 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:38:50.620 13:23:02 nvmf_identify_passthru -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:38:50.620 13:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:50.620 13:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:50.620 13:23:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:50.620 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:38:50.620 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:38:50.620 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:38:50.620 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:38:50.620 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:38:50.620 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:38:50.620 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:50.620 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:50.878 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
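For reference, the get_first_nvme_bdf / nvme_identify steps traced just above (and the model-number lookup that follows immediately below) reduce to roughly the following shell. This is a condensed sketch based on the commands visible in this log, not the literal helper code; the real get_nvme_bdfs builds a bash array rather than using head -n1:

  rootdir=/home/vagrant/spdk_repo/spdk
  # first local NVMe controller reported by gen_nvme.sh -> 0000:00:10.0 in this run
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  # pull the serial number (12340 here) straight out of the identify output
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')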
00:38:50.878 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:38:50.878 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:50.878 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=128493 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:51.137 13:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 128493 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 128493 ']' 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:51.137 13:23:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.137 [2024-07-15 13:23:03.491883] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:38:51.137 [2024-07-15 13:23:03.492204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.394 [2024-07-15 13:23:03.635680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:51.394 [2024-07-15 13:23:03.708193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.394 [2024-07-15 13:23:03.708495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.394 [2024-07-15 13:23:03.708662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.394 [2024-07-15 13:23:03.708849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
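Taken together, the identify_passthru target bring-up traced around this point (launch just above, RPC configuration just below) amounts to the following sequence. Sketch only: rpc_cmd is the test suite's JSON-RPC helper, assumed here to forward to scripts/rpc.py against /var/tmp/spdk.sock:

  # backgrounded; the suite then waits for the RPC socket (waitforlisten)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  rpc_cmd nvmf_set_config --passthru-identify-ctrlr      # enable the custom identify handler
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420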
00:38:51.394 [2024-07-15 13:23:03.708893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:51.394 [2024-07-15 13:23:03.709073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.394 [2024-07-15 13:23:03.709622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:51.394 [2024-07-15 13:23:03.709836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:51.394 [2024-07-15 13:23:03.709921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 [2024-07-15 13:23:04.621617] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 [2024-07-15 13:23:04.631021] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 Nvme0n1 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 [2024-07-15 13:23:04.765749] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.330 [ 00:38:52.330 { 00:38:52.330 "allow_any_host": true, 00:38:52.330 "hosts": [], 00:38:52.330 "listen_addresses": [], 00:38:52.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:52.330 "subtype": "Discovery" 00:38:52.330 }, 00:38:52.330 { 00:38:52.330 "allow_any_host": true, 00:38:52.330 "hosts": [], 00:38:52.330 "listen_addresses": [ 00:38:52.330 { 00:38:52.330 "adrfam": "IPv4", 00:38:52.330 "traddr": "10.0.0.2", 00:38:52.330 "trsvcid": "4420", 00:38:52.330 "trtype": "TCP" 00:38:52.330 } 00:38:52.330 ], 00:38:52.330 "max_cntlid": 65519, 00:38:52.330 "max_namespaces": 1, 00:38:52.330 "min_cntlid": 1, 00:38:52.330 "model_number": "SPDK bdev Controller", 00:38:52.330 "namespaces": [ 00:38:52.330 { 00:38:52.330 "bdev_name": "Nvme0n1", 00:38:52.330 "name": "Nvme0n1", 00:38:52.330 "nguid": "ECFD613DDDA340BC8CF90FBB24ADF71A", 00:38:52.330 "nsid": 1, 00:38:52.330 "uuid": "ecfd613d-dda3-40bc-8cf9-0fbb24adf71a" 00:38:52.330 } 00:38:52.330 ], 00:38:52.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:52.330 "serial_number": "SPDK00000000000001", 00:38:52.330 "subtype": "NVMe" 00:38:52.330 } 00:38:52.330 ] 00:38:52.330 13:23:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:52.330 13:23:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:52.588 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:38:52.588 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:52.588 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:52.588 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:52.845 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:38:52.846 13:23:05 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:38:52.846 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:38:52.846 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:52.846 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.846 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.846 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.846 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:52.846 13:23:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:52.846 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # nvmfcleanup 00:38:52.846 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:53.104 rmmod nvme_tcp 00:38:53.104 rmmod nvme_fabrics 00:38:53.104 rmmod nvme_keyring 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@493 -- # '[' -n 128493 ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@494 -- # killprocess 128493 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 128493 ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 128493 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128493 00:38:53.104 killing process with pid 128493 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128493' 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 128493 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 128493 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@282 -- # remove_spdk_ns 00:38:53.104 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.104 
13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:53.104 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.362 13:23:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:38:53.362 ************************************ 00:38:53.362 END TEST nvmf_identify_passthru 00:38:53.362 ************************************ 00:38:53.362 00:38:53.362 real 0m3.069s 00:38:53.362 user 0m7.915s 00:38:53.362 sys 0m0.748s 00:38:53.362 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:53.362 13:23:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.362 13:23:05 -- common/autotest_common.sh@1142 -- # return 0 00:38:53.362 13:23:05 -- spdk/autotest.sh@295 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:38:53.362 13:23:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:53.362 13:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:53.362 13:23:05 -- common/autotest_common.sh@10 -- # set +x 00:38:53.362 ************************************ 00:38:53.362 START TEST nvmf_dif 00:38:53.362 ************************************ 00:38:53.362 13:23:05 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:38:53.362 * Looking for test storage... 00:38:53.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:53.362 13:23:05 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.362 13:23:05 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.362 13:23:05 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.362 13:23:05 
nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.362 13:23:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.362 13:23:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.362 13:23:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:53.362 13:23:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:53.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:53.362 13:23:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@452 -- # prepare_net_devs 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@414 -- # local -g is_hw=no 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@416 -- # remove_spdk_ns 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@632 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.362 13:23:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:53.362 13:23:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@436 -- # nvmf_veth_init 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:38:53.362 Cannot find device "nvmf_tgt_br" 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@159 -- # true 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:38:53.362 Cannot find device "nvmf_tgt_br2" 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@160 -- # true 00:38:53.362 13:23:05 nvmf_dif -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:38:53.363 13:23:05 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:38:53.363 Cannot find device "nvmf_tgt_br" 00:38:53.363 13:23:05 nvmf_dif -- nvmf/common.sh@162 -- # true 00:38:53.363 13:23:05 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:38:53.363 Cannot find device "nvmf_tgt_br2" 00:38:53.363 13:23:05 nvmf_dif -- nvmf/common.sh@163 -- # true 00:38:53.363 13:23:05 nvmf_dif -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:53.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@166 -- # true 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:53.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@167 -- # true 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@170 -- # ip 
netns add nvmf_tgt_ns_spdk 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:53.620 13:23:05 nvmf_dif -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:38:53.620 13:23:06 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:38:53.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:53.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:38:53.878 00:38:53.878 --- 10.0.0.2 ping statistics --- 00:38:53.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.878 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:38:53.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:53.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:38:53.878 00:38:53.878 --- 10.0.0.3 ping statistics --- 00:38:53.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.878 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:53.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:53.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:38:53.878 00:38:53.878 --- 10.0.0.1 ping statistics --- 00:38:53.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:53.878 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@437 -- # return 0 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@454 -- # '[' iso == iso ']' 00:38:53.878 13:23:06 nvmf_dif -- nvmf/common.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:54.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:54.136 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:54.136 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:38:54.136 13:23:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:54.136 13:23:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@485 -- # nvmfpid=128836 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@486 -- # waitforlisten 128836 00:38:54.136 13:23:06 nvmf_dif -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 128836 ']' 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:54.136 13:23:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.136 [2024-07-15 13:23:06.564240] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:38:54.136 [2024-07-15 13:23:06.564336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.393 [2024-07-15 13:23:06.703942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.393 [2024-07-15 13:23:06.781432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
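Before the individual fio_dif_* cases below run, the dif target is configured much like the passthru one above, except that DIF insert/strip is enabled on the TCP transport and a metadata-capable null bdev stands in for real media. Condensed from the commands visible below (NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16, NULL_DIF=1 as set earlier); rpc_cmd is again the suite's JSON-RPC helper:

  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420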
00:38:54.393 [2024-07-15 13:23:06.781491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.393 [2024-07-15 13:23:06.781505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.393 [2024-07-15 13:23:06.781515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.393 [2024-07-15 13:23:06.781524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.393 [2024-07-15 13:23:06.781557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.651 13:23:06 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:54.651 13:23:06 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:54.651 13:23:06 nvmf_dif -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 13:23:06 nvmf_dif -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.652 13:23:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:54.652 13:23:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 [2024-07-15 13:23:06.917442] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.652 13:23:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 ************************************ 00:38:54.652 START TEST fio_dif_1_default 00:38:54.652 ************************************ 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 bdev_null0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.652 13:23:06 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.652 [2024-07-15 13:23:06.961564] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@536 -- # config=() 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@536 -- # local subsystem config 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:38:54.652 { 00:38:54.652 "params": { 00:38:54.652 "name": "Nvme$subsystem", 00:38:54.652 "trtype": "$TEST_TRANSPORT", 00:38:54.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.652 "adrfam": "ipv4", 00:38:54.652 "trsvcid": "$NVMF_PORT", 00:38:54.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.652 "hdgst": ${hdgst:-false}, 00:38:54.652 "ddgst": ${ddgst:-false} 00:38:54.652 }, 00:38:54.652 "method": "bdev_nvme_attach_controller" 00:38:54.652 } 00:38:54.652 EOF 00:38:54.652 )") 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # 
gen_fio_conf 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # cat 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # jq . 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@561 -- # IFS=, 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:38:54.652 "params": { 00:38:54.652 "name": "Nvme0", 00:38:54.652 "trtype": "tcp", 00:38:54.652 "traddr": "10.0.0.2", 00:38:54.652 "adrfam": "ipv4", 00:38:54.652 "trsvcid": "4420", 00:38:54.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.652 "hdgst": false, 00:38:54.652 "ddgst": false 00:38:54.652 }, 00:38:54.652 "method": "bdev_nvme_attach_controller" 00:38:54.652 }' 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:54.652 13:23:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:54.652 13:23:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:54.652 13:23:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:54.652 13:23:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:54.652 13:23:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.910 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:54.910 fio-3.35 00:38:54.910 Starting 1 thread 00:39:07.204 00:39:07.204 filename0: (groupid=0, jobs=1): err= 0: pid=128906: Mon Jul 15 13:23:17 2024 00:39:07.204 read: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(144MiB/10014msec) 00:39:07.204 slat (nsec): min=6284, max=65541, avg=9078.54, stdev=2922.15 00:39:07.204 clat (usec): min=429, max=42021, avg=1059.35, stdev=4722.45 00:39:07.204 lat (usec): min=436, max=42034, avg=1068.43, stdev=4722.70 00:39:07.204 clat percentiles (usec): 00:39:07.204 | 1.00th=[ 461], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:39:07.204 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 
498], 00:39:07.204 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 619], 00:39:07.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:39:07.204 | 99.99th=[42206] 00:39:07.204 bw ( KiB/s): min= 1536, max=23552, per=100.00%, avg=14745.60, stdev=6030.38, samples=20 00:39:07.204 iops : min= 384, max= 5888, avg=3686.40, stdev=1507.60, samples=20 00:39:07.204 lat (usec) : 500=61.67%, 750=36.92%, 1000=0.02% 00:39:07.204 lat (msec) : 2=0.02%, 10=0.01%, 50=1.37% 00:39:07.204 cpu : usr=88.28%, sys=9.87%, ctx=24, majf=0, minf=9 00:39:07.204 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.205 issued rwts: total=36868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.205 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:07.205 00:39:07.205 Run status group 0 (all jobs): 00:39:07.205 READ: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=144MiB (151MB), run=10014-10014msec 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 ************************************ 00:39:07.205 END TEST fio_dif_1_default 00:39:07.205 ************************************ 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 00:39:07.205 real 0m10.934s 00:39:07.205 user 0m9.452s 00:39:07.205 sys 0m1.206s 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:07.205 13:23:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:07.205 13:23:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:07.205 13:23:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 ************************************ 00:39:07.205 START TEST fio_dif_1_multi_subsystems 00:39:07.205 ************************************ 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 bdev_null0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 [2024-07-15 13:23:17.942162] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 bdev_null1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@536 -- # config=() 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@536 -- # local subsystem config 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:07.205 { 00:39:07.205 "params": { 00:39:07.205 "name": "Nvme$subsystem", 00:39:07.205 "trtype": "$TEST_TRANSPORT", 00:39:07.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:07.205 "adrfam": "ipv4", 00:39:07.205 "trsvcid": "$NVMF_PORT", 00:39:07.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:07.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:07.205 "hdgst": ${hdgst:-false}, 00:39:07.205 "ddgst": ${ddgst:-false} 00:39:07.205 }, 00:39:07.205 "method": "bdev_nvme_attach_controller" 00:39:07.205 } 00:39:07.205 EOF 00:39:07.205 )") 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:07.205 13:23:17 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # cat 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:07.205 { 00:39:07.205 "params": { 00:39:07.205 "name": "Nvme$subsystem", 00:39:07.205 "trtype": "$TEST_TRANSPORT", 00:39:07.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:07.205 "adrfam": "ipv4", 00:39:07.205 "trsvcid": "$NVMF_PORT", 00:39:07.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:07.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:07.205 "hdgst": ${hdgst:-false}, 00:39:07.205 "ddgst": ${ddgst:-false} 00:39:07.205 }, 00:39:07.205 "method": "bdev_nvme_attach_controller" 00:39:07.205 } 00:39:07.205 EOF 00:39:07.205 )") 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # cat 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # jq . 
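Aside, before the rendered attach configuration below: the JSON being assembled here points at the two null-bdev subsystems created by the rpc_cmd calls traced above. A condensed, hand-run equivalent of that target setup is sketched here as an illustration only; it assumes an nvmf_tgt already running on the default RPC socket with a TCP transport created earlier in the test, and uses the stock scripts/rpc.py client in place of the rpc_cmd wrapper. All argument values are taken verbatim from the trace.

# hypothetical manual replay of the traced target setup for subsystems 0 and 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420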
00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@561 -- # IFS=, 00:39:07.205 13:23:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:39:07.205 "params": { 00:39:07.205 "name": "Nvme0", 00:39:07.205 "trtype": "tcp", 00:39:07.205 "traddr": "10.0.0.2", 00:39:07.205 "adrfam": "ipv4", 00:39:07.205 "trsvcid": "4420", 00:39:07.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.206 "hdgst": false, 00:39:07.206 "ddgst": false 00:39:07.206 }, 00:39:07.206 "method": "bdev_nvme_attach_controller" 00:39:07.206 },{ 00:39:07.206 "params": { 00:39:07.206 "name": "Nvme1", 00:39:07.206 "trtype": "tcp", 00:39:07.206 "traddr": "10.0.0.2", 00:39:07.206 "adrfam": "ipv4", 00:39:07.206 "trsvcid": "4420", 00:39:07.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:07.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:07.206 "hdgst": false, 00:39:07.206 "ddgst": false 00:39:07.206 }, 00:39:07.206 "method": "bdev_nvme_attach_controller" 00:39:07.206 }' 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:07.206 13:23:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:07.206 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:07.206 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:07.206 fio-3.35 00:39:07.206 Starting 2 threads 00:39:17.166 00:39:17.166 filename0: (groupid=0, jobs=1): err= 0: pid=129056: Mon Jul 15 13:23:28 2024 00:39:17.166 read: IOPS=505, BW=2021KiB/s (2069kB/s)(19.8MiB/10009msec) 00:39:17.166 slat (usec): min=4, max=243, avg=10.78, stdev= 7.64 00:39:17.166 clat (usec): min=462, max=43850, avg=7883.75, stdev=15464.00 00:39:17.166 lat (usec): min=470, max=43862, avg=7894.53, stdev=15464.80 00:39:17.166 clat percentiles (usec): 00:39:17.166 | 1.00th=[ 474], 5.00th=[ 486], 10.00th=[ 494], 20.00th=[ 506], 00:39:17.166 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 603], 60.00th=[ 717], 00:39:17.166 | 70.00th=[ 1106], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[41157], 00:39:17.166 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43779], 00:39:17.166 | 99.99th=[43779] 00:39:17.166 bw ( KiB/s): min= 704, max= 5120, per=44.04%, avg=2020.80, stdev=1211.53, samples=20 00:39:17.166 iops : min= 176, 
max= 1280, avg=505.20, stdev=302.88, samples=20 00:39:17.166 lat (usec) : 500=14.38%, 750=46.48%, 1000=4.33% 00:39:17.166 lat (msec) : 2=16.30%, 4=0.71%, 10=0.08%, 50=17.72% 00:39:17.166 cpu : usr=93.75%, sys=5.18%, ctx=64, majf=0, minf=0 00:39:17.166 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.166 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.166 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:17.166 filename1: (groupid=0, jobs=1): err= 0: pid=129057: Mon Jul 15 13:23:28 2024 00:39:17.166 read: IOPS=641, BW=2566KiB/s (2628kB/s)(25.1MiB/10008msec) 00:39:17.166 slat (nsec): min=7346, max=67753, avg=10163.28, stdev=5366.33 00:39:17.166 clat (usec): min=458, max=42365, avg=6203.59, stdev=13849.10 00:39:17.166 lat (usec): min=466, max=42406, avg=6213.75, stdev=13849.59 00:39:17.166 clat percentiles (usec): 00:39:17.166 | 1.00th=[ 474], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 515], 00:39:17.166 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:39:17.166 | 70.00th=[ 865], 80.00th=[ 1188], 90.00th=[40633], 95.00th=[41157], 00:39:17.166 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:17.166 | 99.99th=[42206] 00:39:17.166 bw ( KiB/s): min= 736, max= 8128, per=55.95%, avg=2566.45, stdev=1844.06, samples=20 00:39:17.166 iops : min= 184, max= 2032, avg=641.60, stdev=461.03, samples=20 00:39:17.166 lat (usec) : 500=12.71%, 750=54.78%, 1000=4.24% 00:39:17.166 lat (msec) : 2=14.00%, 4=0.62%, 10=0.06%, 50=13.58% 00:39:17.166 cpu : usr=94.13%, sys=5.08%, ctx=19, majf=0, minf=0 00:39:17.166 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.166 issued rwts: total=6420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.166 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:17.166 00:39:17.166 Run status group 0 (all jobs): 00:39:17.166 READ: bw=4586KiB/s (4696kB/s), 2021KiB/s-2566KiB/s (2069kB/s-2628kB/s), io=44.8MiB (47.0MB), run=10008-10009msec 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 ************************************ 00:39:17.166 END TEST fio_dif_1_multi_subsystems 00:39:17.166 ************************************ 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 00:39:17.166 real 0m11.057s 00:39:17.166 user 0m19.521s 00:39:17.166 sys 0m1.261s 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:17.166 13:23:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:29 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:17.166 13:23:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:17.166 13:23:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:17.166 13:23:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:17.166 13:23:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 ************************************ 00:39:17.166 START TEST fio_dif_rand_params 00:39:17.166 ************************************ 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 bdev_null0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:17.166 [2024-07-15 13:23:29.043491] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:17.166 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:17.167 { 00:39:17.167 "params": { 00:39:17.167 "name": "Nvme$subsystem", 00:39:17.167 "trtype": "$TEST_TRANSPORT", 00:39:17.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.167 "adrfam": "ipv4", 00:39:17.167 "trsvcid": "$NVMF_PORT", 00:39:17.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.167 "hdgst": ${hdgst:-false}, 00:39:17.167 "ddgst": ${ddgst:-false} 00:39:17.167 }, 00:39:17.167 "method": 
"bdev_nvme_attach_controller" 00:39:17.167 } 00:39:17.167 EOF 00:39:17.167 )") 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 
00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=, 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:39:17.167 "params": { 00:39:17.167 "name": "Nvme0", 00:39:17.167 "trtype": "tcp", 00:39:17.167 "traddr": "10.0.0.2", 00:39:17.167 "adrfam": "ipv4", 00:39:17.167 "trsvcid": "4420", 00:39:17.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:17.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:17.167 "hdgst": false, 00:39:17.167 "ddgst": false 00:39:17.167 }, 00:39:17.167 "method": "bdev_nvme_attach_controller" 00:39:17.167 }' 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:17.167 13:23:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:17.167 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:17.167 ... 
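For context on the fio side of this invocation: /dev/fd/61 above is the job file produced by gen_fio_conf, and /dev/fd/62 is the JSON attach configuration printed just above. A hypothetical hand-written equivalent for this particular run is sketched below; rw, bs, iodepth, numjobs and runtime mirror the traced parameters (randread, 128k, 3, 3, 5s), while the filename= bdev name, time_based, and the ./bdev.json path are assumptions, and the real generated job file may include further options (for example verification settings) not shown here.

# hypothetical stand-alone reproduction of this fio run; Nvme0n1 and the
# file paths are assumptions, parameter values mirror the trace above
cat > dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

# ./bdev.json stands in for the generated attach configuration printed above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif_rand_params.fio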
00:39:17.167 fio-3.35 00:39:17.167 Starting 3 threads 00:39:22.450 00:39:22.450 filename0: (groupid=0, jobs=1): err= 0: pid=129204: Mon Jul 15 13:23:34 2024 00:39:22.450 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5006msec) 00:39:22.450 slat (nsec): min=5494, max=85530, avg=18765.01, stdev=7763.65 00:39:22.450 clat (usec): min=6840, max=55934, avg=14370.86, stdev=7330.21 00:39:22.450 lat (usec): min=6851, max=55974, avg=14389.63, stdev=7332.01 00:39:22.450 clat percentiles (usec): 00:39:22.450 | 1.00th=[ 7898], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:39:22.450 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:39:22.450 | 70.00th=[13173], 80.00th=[14091], 90.00th=[17433], 95.00th=[35390], 00:39:22.450 | 99.00th=[40109], 99.50th=[52691], 99.90th=[55313], 99.95th=[55837], 00:39:22.450 | 99.99th=[55837] 00:39:22.450 bw ( KiB/s): min=10752, max=32256, per=37.98%, avg=26649.60, stdev=6816.47, samples=10 00:39:22.450 iops : min= 84, max= 252, avg=208.20, stdev=53.25, samples=10 00:39:22.450 lat (msec) : 10=7.48%, 20=83.13%, 50=8.82%, 100=0.58% 00:39:22.450 cpu : usr=91.23%, sys=6.91%, ctx=9, majf=0, minf=0 00:39:22.450 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 issued rwts: total=1043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:22.450 filename0: (groupid=0, jobs=1): err= 0: pid=129205: Mon Jul 15 13:23:34 2024 00:39:22.450 read: IOPS=161, BW=20.2MiB/s (21.1MB/s)(101MiB/5005msec) 00:39:22.450 slat (nsec): min=8160, max=68430, avg=20499.33, stdev=8586.81 00:39:22.450 clat (usec): min=8515, max=51450, avg=18577.27, stdev=8030.16 00:39:22.450 lat (usec): min=8528, max=51464, avg=18597.77, stdev=8030.80 00:39:22.450 clat percentiles (usec): 00:39:22.450 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[12387], 20.00th=[15401], 00:39:22.450 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:39:22.450 | 70.00th=[17433], 80.00th=[18482], 90.00th=[22414], 95.00th=[41157], 00:39:22.450 | 99.00th=[49021], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:39:22.450 | 99.99th=[51643] 00:39:22.450 bw ( KiB/s): min= 8448, max=25600, per=29.37%, avg=20608.00, stdev=5114.66, samples=10 00:39:22.450 iops : min= 66, max= 200, avg=161.00, stdev=39.96, samples=10 00:39:22.450 lat (msec) : 10=2.11%, 20=84.39%, 50=12.76%, 100=0.74% 00:39:22.450 cpu : usr=91.85%, sys=6.51%, ctx=62, majf=0, minf=0 00:39:22.450 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 issued rwts: total=807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:22.450 filename0: (groupid=0, jobs=1): err= 0: pid=129206: Mon Jul 15 13:23:34 2024 00:39:22.450 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(112MiB/5005msec) 00:39:22.450 slat (nsec): min=7974, max=57101, avg=18494.25, stdev=7746.34 00:39:22.450 clat (usec): min=6935, max=63858, avg=16766.28, stdev=9765.69 00:39:22.450 lat (usec): min=6959, max=63878, avg=16784.78, stdev=9766.72 00:39:22.450 clat percentiles (usec): 00:39:22.450 | 1.00th=[ 7832], 5.00th=[11076], 10.00th=[11731], 20.00th=[12387], 
00:39:22.450 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[14091], 00:39:22.450 | 70.00th=[14746], 80.00th=[15795], 90.00th=[35914], 95.00th=[40109], 00:39:22.450 | 99.00th=[56361], 99.50th=[56361], 99.90th=[63701], 99.95th=[63701], 00:39:22.450 | 99.99th=[63701] 00:39:22.450 bw ( KiB/s): min= 9216, max=29696, per=32.55%, avg=22835.20, stdev=7213.37, samples=10 00:39:22.450 iops : min= 72, max= 232, avg=178.40, stdev=56.35, samples=10 00:39:22.450 lat (msec) : 10=3.47%, 20=84.90%, 50=8.95%, 100=2.68% 00:39:22.450 cpu : usr=92.05%, sys=6.27%, ctx=18, majf=0, minf=0 00:39:22.450 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.450 issued rwts: total=894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:22.450 00:39:22.450 Run status group 0 (all jobs): 00:39:22.450 READ: bw=68.5MiB/s (71.8MB/s), 20.2MiB/s-26.0MiB/s (21.1MB/s-27.3MB/s), io=343MiB (360MB), run=5005-5006msec 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 bdev_null0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 [2024-07-15 13:23:34.979668] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 bdev_null1 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 bdev_null2 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.721 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:22.722 { 00:39:22.722 "params": { 00:39:22.722 "name": 
"Nvme$subsystem", 00:39:22.722 "trtype": "$TEST_TRANSPORT", 00:39:22.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "$NVMF_PORT", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:22.722 "hdgst": ${hdgst:-false}, 00:39:22.722 "ddgst": ${ddgst:-false} 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 } 00:39:22.722 EOF 00:39:22.722 )") 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:22.722 { 00:39:22.722 "params": { 00:39:22.722 "name": "Nvme$subsystem", 00:39:22.722 "trtype": "$TEST_TRANSPORT", 00:39:22.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "$NVMF_PORT", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:22.722 "hdgst": ${hdgst:-false}, 00:39:22.722 "ddgst": ${ddgst:-false} 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 } 00:39:22.722 EOF 00:39:22.722 )") 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:22.722 13:23:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:22.722 { 00:39:22.722 "params": { 00:39:22.722 "name": "Nvme$subsystem", 00:39:22.722 "trtype": "$TEST_TRANSPORT", 00:39:22.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "$NVMF_PORT", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:22.722 "hdgst": ${hdgst:-false}, 00:39:22.722 "ddgst": ${ddgst:-false} 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 } 00:39:22.722 EOF 00:39:22.722 )") 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=, 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:39:22.722 "params": { 00:39:22.722 "name": "Nvme0", 00:39:22.722 "trtype": "tcp", 00:39:22.722 "traddr": "10.0.0.2", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "4420", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:22.722 "hdgst": false, 00:39:22.722 "ddgst": false 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 },{ 00:39:22.722 "params": { 00:39:22.722 "name": "Nvme1", 00:39:22.722 "trtype": "tcp", 00:39:22.722 "traddr": "10.0.0.2", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "4420", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:22.722 "hdgst": false, 00:39:22.722 "ddgst": false 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 },{ 00:39:22.722 "params": { 00:39:22.722 "name": "Nvme2", 00:39:22.722 "trtype": "tcp", 00:39:22.722 "traddr": "10.0.0.2", 00:39:22.722 "adrfam": "ipv4", 00:39:22.722 "trsvcid": "4420", 00:39:22.722 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:22.722 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:22.722 "hdgst": false, 00:39:22.722 "ddgst": false 00:39:22.722 }, 00:39:22.722 "method": "bdev_nvme_attach_controller" 00:39:22.722 }' 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:22.722 
13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:22.722 13:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:22.980 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:22.980 ... 00:39:22.980 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:22.980 ... 00:39:22.980 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:22.980 ... 00:39:22.980 fio-3.35 00:39:22.980 Starting 24 threads 00:39:35.171 00:39:35.171 filename0: (groupid=0, jobs=1): err= 0: pid=129295: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=153, BW=614KiB/s (628kB/s)(6148KiB/10020msec) 00:39:35.172 slat (usec): min=5, max=8036, avg=22.42, stdev=289.30 00:39:35.172 clat (msec): min=45, max=371, avg=103.95, stdev=41.44 00:39:35.172 lat (msec): min=45, max=371, avg=103.97, stdev=41.44 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 70], 20.00th=[ 73], 00:39:35.172 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 106], 00:39:35.172 | 70.00th=[ 117], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 171], 00:39:35.172 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 372], 99.95th=[ 372], 00:39:35.172 | 99.99th=[ 372] 00:39:35.172 bw ( KiB/s): min= 256, max= 808, per=3.52%, avg=606.37, stdev=162.56, samples=19 00:39:35.172 iops : min= 64, max= 202, avg=151.58, stdev=40.66, samples=19 00:39:35.172 lat (msec) : 50=2.15%, 100=53.87%, 250=42.23%, 500=1.76% 00:39:35.172 cpu : usr=31.56%, sys=1.15%, ctx=835, majf=0, minf=9 00:39:35.172 IO depths : 1=2.1%, 2=4.5%, 4=14.4%, 8=67.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129296: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=198, BW=793KiB/s (812kB/s)(7976KiB/10056msec) 00:39:35.172 slat (usec): min=4, max=4044, avg=17.44, stdev=127.68 00:39:35.172 clat (msec): min=4, max=435, avg=80.47, stdev=45.28 00:39:35.172 lat (msec): min=4, max=435, avg=80.48, stdev=45.28 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 5], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 50], 00:39:35.172 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 74], 60.00th=[ 81], 00:39:35.172 | 70.00th=[ 88], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 161], 00:39:35.172 | 99.00th=[ 245], 99.50th=[ 288], 99.90th=[ 435], 99.95th=[ 435], 00:39:35.172 | 99.99th=[ 435] 00:39:35.172 bw ( KiB/s): min= 264, max= 1654, per=4.60%, avg=790.45, stdev=331.06, samples=20 00:39:35.172 iops : min= 66, max= 413, avg=197.55, stdev=82.69, samples=20 00:39:35.172 lat (msec) : 10=3.21%, 20=1.60%, 50=15.95%, 100=55.42%, 250=23.02% 00:39:35.172 lat (msec) : 500=0.80% 00:39:35.172 cpu : usr=42.26%, sys=1.31%, ctx=1219, majf=0, minf=9 00:39:35.172 IO depths : 1=2.1%, 2=4.3%, 4=12.5%, 8=70.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=90.7%, 
8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129297: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=199, BW=797KiB/s (816kB/s)(7984KiB/10021msec) 00:39:35.172 slat (usec): min=3, max=8025, avg=19.74, stdev=219.92 00:39:35.172 clat (msec): min=32, max=314, avg=80.21, stdev=38.23 00:39:35.172 lat (msec): min=32, max=314, avg=80.23, stdev=38.23 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:39:35.172 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 83], 00:39:35.172 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 136], 00:39:35.172 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 317], 00:39:35.172 | 99.99th=[ 317] 00:39:35.172 bw ( KiB/s): min= 256, max= 1170, per=4.61%, avg=793.05, stdev=251.16, samples=19 00:39:35.172 iops : min= 64, max= 292, avg=198.21, stdev=62.73, samples=19 00:39:35.172 lat (msec) : 50=17.28%, 100=65.03%, 250=16.38%, 500=1.30% 00:39:35.172 cpu : usr=41.82%, sys=1.50%, ctx=1291, majf=0, minf=9 00:39:35.172 IO depths : 1=1.5%, 2=3.1%, 4=10.2%, 8=73.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129298: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=185, BW=742KiB/s (759kB/s)(7432KiB/10022msec) 00:39:35.172 slat (usec): min=7, max=8038, avg=24.13, stdev=227.90 00:39:35.172 clat (msec): min=25, max=423, avg=86.12, stdev=42.72 00:39:35.172 lat (msec): min=25, max=423, avg=86.15, stdev=42.72 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 59], 00:39:35.172 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 83], 00:39:35.172 | 70.00th=[ 93], 80.00th=[ 107], 90.00th=[ 127], 95.00th=[ 142], 00:39:35.172 | 99.00th=[ 262], 99.50th=[ 388], 99.90th=[ 422], 99.95th=[ 422], 00:39:35.172 | 99.99th=[ 422] 00:39:35.172 bw ( KiB/s): min= 256, max= 992, per=4.18%, avg=718.89, stdev=195.84, samples=19 00:39:35.172 iops : min= 64, max= 248, avg=179.68, stdev=48.96, samples=19 00:39:35.172 lat (msec) : 50=9.96%, 100=66.04%, 250=22.60%, 500=1.40% 00:39:35.172 cpu : usr=38.49%, sys=1.13%, ctx=1136, majf=0, minf=9 00:39:35.172 IO depths : 1=1.1%, 2=2.7%, 4=10.5%, 8=73.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129299: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=193, BW=775KiB/s (793kB/s)(7776KiB/10038msec) 00:39:35.172 slat (usec): min=4, max=8029, avg=17.77, stdev=181.96 00:39:35.172 clat (msec): min=33, max=300, avg=82.48, stdev=34.84 00:39:35.172 lat (msec): min=33, max=300, avg=82.49, stdev=34.84 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:39:35.172 
| 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:39:35.172 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 142], 00:39:35.172 | 99.00th=[ 253], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:39:35.172 | 99.99th=[ 300] 00:39:35.172 bw ( KiB/s): min= 384, max= 1040, per=4.48%, avg=771.20, stdev=184.56, samples=20 00:39:35.172 iops : min= 96, max= 260, avg=192.80, stdev=46.14, samples=20 00:39:35.172 lat (msec) : 50=12.35%, 100=71.24%, 250=15.07%, 500=1.34% 00:39:35.172 cpu : usr=31.92%, sys=1.07%, ctx=853, majf=0, minf=9 00:39:35.172 IO depths : 1=0.4%, 2=1.0%, 4=7.5%, 8=77.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=89.2%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129300: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=179, BW=716KiB/s (733kB/s)(7180KiB/10027msec) 00:39:35.172 slat (usec): min=3, max=8061, avg=39.47, stdev=378.80 00:39:35.172 clat (msec): min=26, max=349, avg=89.10, stdev=41.91 00:39:35.172 lat (msec): min=26, max=349, avg=89.14, stdev=41.91 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:39:35.172 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:39:35.172 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 124], 95.00th=[ 167], 00:39:35.172 | 99.00th=[ 249], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:39:35.172 | 99.99th=[ 351] 00:39:35.172 bw ( KiB/s): min= 256, max= 1024, per=4.10%, avg=705.79, stdev=201.44, samples=19 00:39:35.172 iops : min= 64, max= 256, avg=176.42, stdev=50.33, samples=19 00:39:35.172 lat (msec) : 50=7.97%, 100=65.85%, 250=25.29%, 500=0.89% 00:39:35.172 cpu : usr=32.17%, sys=0.87%, ctx=923, majf=0, minf=9 00:39:35.172 IO depths : 1=1.4%, 2=3.2%, 4=10.6%, 8=72.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:39:35.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.172 issued rwts: total=1795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.172 filename0: (groupid=0, jobs=1): err= 0: pid=129301: Mon Jul 15 13:23:46 2024 00:39:35.172 read: IOPS=165, BW=663KiB/s (679kB/s)(6652KiB/10027msec) 00:39:35.172 slat (usec): min=3, max=8047, avg=20.93, stdev=220.42 00:39:35.172 clat (msec): min=35, max=373, avg=96.28, stdev=45.08 00:39:35.172 lat (msec): min=35, max=373, avg=96.30, stdev=45.08 00:39:35.172 clat percentiles (msec): 00:39:35.172 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 66], 00:39:35.172 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 96], 00:39:35.172 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 167], 00:39:35.172 | 99.00th=[ 288], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:39:35.172 | 99.99th=[ 376] 00:39:35.172 bw ( KiB/s): min= 256, max= 992, per=3.79%, avg=652.79, stdev=190.63, samples=19 00:39:35.172 iops : min= 64, max= 248, avg=163.16, stdev=47.63, samples=19 00:39:35.172 lat (msec) : 50=8.54%, 100=55.02%, 250=33.85%, 500=2.59% 00:39:35.172 cpu : usr=33.17%, sys=1.10%, ctx=871, majf=0, minf=9 00:39:35.173 IO depths : 1=1.6%, 2=3.7%, 4=12.9%, 8=69.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename0: (groupid=0, jobs=1): err= 0: pid=129302: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=156, BW=626KiB/s (641kB/s)(6272KiB/10014msec) 00:39:35.173 slat (usec): min=6, max=4060, avg=22.92, stdev=143.22 00:39:35.173 clat (msec): min=44, max=387, avg=101.97, stdev=44.64 00:39:35.173 lat (msec): min=44, max=387, avg=102.00, stdev=44.65 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 48], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 74], 00:39:35.173 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 102], 00:39:35.173 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 167], 00:39:35.173 | 99.00th=[ 334], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:39:35.173 | 99.99th=[ 388] 00:39:35.173 bw ( KiB/s): min= 256, max= 896, per=3.60%, avg=619.47, stdev=182.31, samples=19 00:39:35.173 iops : min= 64, max= 224, avg=154.84, stdev=45.57, samples=19 00:39:35.173 lat (msec) : 50=2.30%, 100=56.70%, 250=38.97%, 500=2.04% 00:39:35.173 cpu : usr=40.65%, sys=1.20%, ctx=1425, majf=0, minf=9 00:39:35.173 IO depths : 1=4.0%, 2=8.8%, 4=20.6%, 8=58.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129303: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=190, BW=763KiB/s (781kB/s)(7648KiB/10027msec) 00:39:35.173 slat (usec): min=4, max=4048, avg=20.07, stdev=184.01 00:39:35.173 clat (msec): min=26, max=296, avg=83.72, stdev=39.15 00:39:35.173 lat (msec): min=26, max=296, avg=83.74, stdev=39.15 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 57], 00:39:35.173 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:39:35.173 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 157], 00:39:35.173 | 99.00th=[ 251], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:39:35.173 | 99.99th=[ 296] 00:39:35.173 bw ( KiB/s): min= 336, max= 1168, per=4.35%, avg=748.32, stdev=231.87, samples=19 00:39:35.173 iops : min= 84, max= 292, avg=187.05, stdev=57.95, samples=19 00:39:35.173 lat (msec) : 50=11.45%, 100=67.31%, 250=19.67%, 500=1.57% 00:39:35.173 cpu : usr=41.84%, sys=1.50%, ctx=1247, majf=0, minf=9 00:39:35.173 IO depths : 1=2.2%, 2=5.2%, 4=14.6%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129304: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=218, BW=874KiB/s (895kB/s)(8756KiB/10022msec) 00:39:35.173 slat (nsec): min=5064, max=55878, avg=11040.34, stdev=5487.37 00:39:35.173 clat (msec): min=31, max=387, avg=73.18, stdev=35.85 00:39:35.173 lat (msec): min=31, max=387, avg=73.19, stdev=35.85 00:39:35.173 
clat percentiles (msec): 00:39:35.173 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 49], 00:39:35.173 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 73], 00:39:35.173 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 126], 00:39:35.173 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 388], 99.95th=[ 388], 00:39:35.173 | 99.99th=[ 388] 00:39:35.173 bw ( KiB/s): min= 304, max= 1192, per=5.03%, avg=864.84, stdev=237.06, samples=19 00:39:35.173 iops : min= 76, max= 298, avg=216.21, stdev=59.27, samples=19 00:39:35.173 lat (msec) : 50=22.75%, 100=66.24%, 250=9.73%, 500=1.28% 00:39:35.173 cpu : usr=43.11%, sys=1.28%, ctx=1772, majf=0, minf=9 00:39:35.173 IO depths : 1=0.4%, 2=0.9%, 4=6.3%, 8=79.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=89.0%, 8=6.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129305: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=199, BW=798KiB/s (817kB/s)(8008KiB/10035msec) 00:39:35.173 slat (usec): min=3, max=8049, avg=37.00, stdev=358.67 00:39:35.173 clat (msec): min=23, max=431, avg=79.98, stdev=46.81 00:39:35.173 lat (msec): min=23, max=431, avg=80.02, stdev=46.81 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:39:35.173 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 74], 00:39:35.173 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 123], 95.00th=[ 155], 00:39:35.173 | 99.00th=[ 279], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:39:35.173 | 99.99th=[ 430] 00:39:35.173 bw ( KiB/s): min= 272, max= 1184, per=4.62%, avg=794.40, stdev=279.51, samples=20 00:39:35.173 iops : min= 68, max= 296, avg=198.60, stdev=69.88, samples=20 00:39:35.173 lat (msec) : 50=16.83%, 100=64.24%, 250=17.33%, 500=1.60% 00:39:35.173 cpu : usr=39.01%, sys=0.99%, ctx=1153, majf=0, minf=9 00:39:35.173 IO depths : 1=1.0%, 2=2.0%, 4=9.1%, 8=75.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129306: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=159, BW=638KiB/s (654kB/s)(6388KiB/10009msec) 00:39:35.173 slat (usec): min=3, max=8052, avg=28.46, stdev=201.35 00:39:35.173 clat (msec): min=9, max=347, avg=100.10, stdev=41.19 00:39:35.173 lat (msec): min=9, max=347, avg=100.13, stdev=41.19 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 19], 5.00th=[ 59], 10.00th=[ 71], 20.00th=[ 74], 00:39:35.173 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 103], 00:39:35.173 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 163], 00:39:35.173 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 347], 00:39:35.173 | 99.99th=[ 347] 00:39:35.173 bw ( KiB/s): min= 256, max= 768, per=3.62%, avg=622.74, stdev=137.75, samples=19 00:39:35.173 iops : min= 64, max= 192, avg=155.68, stdev=34.44, samples=19 00:39:35.173 lat (msec) : 10=0.38%, 20=0.63%, 50=2.63%, 100=54.16%, 250=40.58% 00:39:35.173 lat (msec) : 500=1.63% 00:39:35.173 cpu : 
usr=31.94%, sys=1.10%, ctx=919, majf=0, minf=9 00:39:35.173 IO depths : 1=2.5%, 2=5.6%, 4=15.1%, 8=66.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=1597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129307: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=166, BW=668KiB/s (684kB/s)(6688KiB/10015msec) 00:39:35.173 slat (usec): min=5, max=13054, avg=36.34, stdev=465.49 00:39:35.173 clat (msec): min=16, max=348, avg=95.53, stdev=43.12 00:39:35.173 lat (msec): min=16, max=348, avg=95.57, stdev=43.12 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:39:35.173 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 93], 00:39:35.173 | 70.00th=[ 105], 80.00th=[ 117], 90.00th=[ 144], 95.00th=[ 169], 00:39:35.173 | 99.00th=[ 253], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:39:35.173 | 99.99th=[ 351] 00:39:35.173 bw ( KiB/s): min= 256, max= 944, per=3.80%, avg=654.37, stdev=190.13, samples=19 00:39:35.173 iops : min= 64, max= 236, avg=163.58, stdev=47.53, samples=19 00:39:35.173 lat (msec) : 20=0.24%, 50=5.74%, 100=64.00%, 250=28.41%, 500=1.61% 00:39:35.173 cpu : usr=32.23%, sys=0.98%, ctx=849, majf=0, minf=9 00:39:35.173 IO depths : 1=1.7%, 2=3.8%, 4=11.4%, 8=71.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:39:35.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.173 issued rwts: total=1672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.173 filename1: (groupid=0, jobs=1): err= 0: pid=129308: Mon Jul 15 13:23:46 2024 00:39:35.173 read: IOPS=159, BW=640KiB/s (655kB/s)(6400KiB/10007msec) 00:39:35.173 slat (usec): min=6, max=8029, avg=17.76, stdev=200.65 00:39:35.173 clat (msec): min=16, max=317, avg=99.91, stdev=42.42 00:39:35.173 lat (msec): min=16, max=317, avg=99.93, stdev=42.42 00:39:35.173 clat percentiles (msec): 00:39:35.173 | 1.00th=[ 17], 5.00th=[ 56], 10.00th=[ 68], 20.00th=[ 74], 00:39:35.173 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 100], 00:39:35.173 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 159], 00:39:35.173 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:39:35.173 | 99.99th=[ 317] 00:39:35.173 bw ( KiB/s): min= 256, max= 848, per=3.64%, avg=626.16, stdev=162.92, samples=19 00:39:35.173 iops : min= 64, max= 212, avg=156.53, stdev=40.71, samples=19 00:39:35.173 lat (msec) : 20=1.00%, 50=1.62%, 100=57.56%, 250=37.81%, 500=2.00% 00:39:35.173 cpu : usr=39.69%, sys=1.23%, ctx=1199, majf=0, minf=9 00:39:35.173 IO depths : 1=4.1%, 2=8.8%, 4=20.6%, 8=58.1%, 16=8.5%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename1: (groupid=0, jobs=1): err= 0: pid=129309: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=173, BW=694KiB/s (710kB/s)(6948KiB/10016msec) 00:39:35.174 slat (usec): min=7, 
max=8042, avg=26.71, stdev=297.69 00:39:35.174 clat (msec): min=33, max=299, avg=92.05, stdev=33.77 00:39:35.174 lat (msec): min=33, max=300, avg=92.08, stdev=33.76 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 72], 00:39:35.174 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 93], 00:39:35.174 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 127], 95.00th=[ 136], 00:39:35.174 | 99.00th=[ 249], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:39:35.174 | 99.99th=[ 300] 00:39:35.174 bw ( KiB/s): min= 384, max= 920, per=4.01%, avg=690.11, stdev=135.03, samples=19 00:39:35.174 iops : min= 96, max= 230, avg=172.53, stdev=33.76, samples=19 00:39:35.174 lat (msec) : 50=3.91%, 100=68.45%, 250=26.71%, 500=0.92% 00:39:35.174 cpu : usr=37.87%, sys=1.20%, ctx=1161, majf=0, minf=9 00:39:35.174 IO depths : 1=1.7%, 2=3.6%, 4=11.1%, 8=71.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename1: (groupid=0, jobs=1): err= 0: pid=129310: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=161, BW=644KiB/s (659kB/s)(6444KiB/10006msec) 00:39:35.174 slat (usec): min=4, max=8055, avg=45.26, stdev=399.92 00:39:35.174 clat (msec): min=9, max=435, avg=99.10, stdev=49.84 00:39:35.174 lat (msec): min=9, max=435, avg=99.14, stdev=49.83 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 72], 00:39:35.174 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 99], 00:39:35.174 | 70.00th=[ 109], 80.00th=[ 122], 90.00th=[ 142], 95.00th=[ 157], 00:39:35.174 | 99.00th=[ 296], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:39:35.174 | 99.99th=[ 435] 00:39:35.174 bw ( KiB/s): min= 256, max= 976, per=3.68%, avg=632.21, stdev=197.19, samples=19 00:39:35.174 iops : min= 64, max= 244, avg=158.05, stdev=49.30, samples=19 00:39:35.174 lat (msec) : 10=0.81%, 20=0.19%, 50=4.28%, 100=56.98%, 250=35.75% 00:39:35.174 lat (msec) : 500=1.99% 00:39:35.174 cpu : usr=42.22%, sys=1.37%, ctx=1260, majf=0, minf=9 00:39:35.174 IO depths : 1=2.9%, 2=6.5%, 4=18.1%, 8=62.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=92.4%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename2: (groupid=0, jobs=1): err= 0: pid=129311: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=158, BW=633KiB/s (648kB/s)(6336KiB/10015msec) 00:39:35.174 slat (nsec): min=4641, max=59689, avg=12592.56, stdev=5626.97 00:39:35.174 clat (msec): min=47, max=317, avg=101.02, stdev=39.93 00:39:35.174 lat (msec): min=47, max=317, avg=101.03, stdev=39.92 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 51], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 73], 00:39:35.174 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 100], 00:39:35.174 | 70.00th=[ 111], 80.00th=[ 123], 90.00th=[ 142], 95.00th=[ 165], 00:39:35.174 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:39:35.174 | 99.99th=[ 317] 00:39:35.174 bw ( KiB/s): min= 256, max= 816, per=3.63%, avg=624.00, 
stdev=169.66, samples=19 00:39:35.174 iops : min= 64, max= 204, avg=156.00, stdev=42.42, samples=19 00:39:35.174 lat (msec) : 50=0.32%, 100=60.67%, 250=37.56%, 500=1.45% 00:39:35.174 cpu : usr=45.71%, sys=1.65%, ctx=1390, majf=0, minf=9 00:39:35.174 IO depths : 1=4.2%, 2=8.6%, 4=19.2%, 8=59.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename2: (groupid=0, jobs=1): err= 0: pid=129312: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=163, BW=652KiB/s (668kB/s)(6540KiB/10027msec) 00:39:35.174 slat (usec): min=4, max=8020, avg=19.45, stdev=221.56 00:39:35.174 clat (msec): min=29, max=315, avg=97.97, stdev=39.50 00:39:35.174 lat (msec): min=29, max=315, avg=97.99, stdev=39.50 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:39:35.174 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 96], 00:39:35.174 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 161], 00:39:35.174 | 99.00th=[ 255], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 317], 00:39:35.174 | 99.99th=[ 317] 00:39:35.174 bw ( KiB/s): min= 253, max= 944, per=3.73%, avg=642.63, stdev=181.64, samples=19 00:39:35.174 iops : min= 63, max= 236, avg=160.63, stdev=45.44, samples=19 00:39:35.174 lat (msec) : 50=6.30%, 100=55.84%, 250=36.02%, 500=1.83% 00:39:35.174 cpu : usr=37.45%, sys=1.13%, ctx=1026, majf=0, minf=9 00:39:35.174 IO depths : 1=0.4%, 2=1.3%, 4=7.6%, 8=76.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=89.3%, 8=7.3%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename2: (groupid=0, jobs=1): err= 0: pid=129313: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=194, BW=778KiB/s (797kB/s)(7816KiB/10048msec) 00:39:35.174 slat (usec): min=4, max=8055, avg=28.15, stdev=362.89 00:39:35.174 clat (msec): min=3, max=386, avg=82.00, stdev=42.03 00:39:35.174 lat (msec): min=3, max=387, avg=82.02, stdev=42.03 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:39:35.174 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:39:35.174 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 121], 95.00th=[ 142], 00:39:35.174 | 99.00th=[ 228], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:39:35.174 | 99.99th=[ 388] 00:39:35.174 bw ( KiB/s): min= 256, max= 1272, per=4.52%, avg=777.20, stdev=229.71, samples=20 00:39:35.174 iops : min= 64, max= 318, avg=194.30, stdev=57.43, samples=20 00:39:35.174 lat (msec) : 4=0.82%, 10=1.64%, 20=0.82%, 50=10.54%, 100=67.66% 00:39:35.174 lat (msec) : 250=17.71%, 500=0.82% 00:39:35.174 cpu : usr=31.84%, sys=0.95%, ctx=855, majf=0, minf=9 00:39:35.174 IO depths : 1=0.8%, 2=1.7%, 4=8.8%, 8=76.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:39:35.174 filename2: (groupid=0, jobs=1): err= 0: pid=129314: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=184, BW=739KiB/s (757kB/s)(7424KiB/10044msec) 00:39:35.174 slat (usec): min=5, max=8026, avg=34.57, stdev=332.17 00:39:35.174 clat (msec): min=19, max=307, avg=86.27, stdev=34.71 00:39:35.174 lat (msec): min=19, max=307, avg=86.30, stdev=34.72 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 63], 00:39:35.174 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 85], 00:39:35.174 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 144], 00:39:35.174 | 99.00th=[ 224], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:39:35.174 | 99.99th=[ 309] 00:39:35.174 bw ( KiB/s): min= 432, max= 1024, per=4.28%, avg=736.00, stdev=160.61, samples=20 00:39:35.174 iops : min= 108, max= 256, avg=184.00, stdev=40.15, samples=20 00:39:35.174 lat (msec) : 20=0.11%, 50=8.57%, 100=71.39%, 250=19.07%, 500=0.86% 00:39:35.174 cpu : usr=39.43%, sys=1.13%, ctx=1202, majf=0, minf=9 00:39:35.174 IO depths : 1=1.8%, 2=4.0%, 4=12.6%, 8=70.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:39:35.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 complete : 0=0.0%, 4=90.5%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.174 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.174 filename2: (groupid=0, jobs=1): err= 0: pid=129315: Mon Jul 15 13:23:46 2024 00:39:35.174 read: IOPS=171, BW=687KiB/s (703kB/s)(6880KiB/10019msec) 00:39:35.174 slat (usec): min=4, max=8047, avg=30.42, stdev=386.75 00:39:35.174 clat (msec): min=34, max=311, avg=92.95, stdev=37.48 00:39:35.174 lat (msec): min=34, max=311, avg=92.98, stdev=37.47 00:39:35.174 clat percentiles (msec): 00:39:35.174 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 70], 00:39:35.174 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 94], 00:39:35.174 | 70.00th=[ 104], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 148], 00:39:35.175 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:39:35.175 | 99.99th=[ 313] 00:39:35.175 bw ( KiB/s): min= 384, max= 1024, per=3.97%, avg=683.79, stdev=167.99, samples=19 00:39:35.175 iops : min= 96, max= 256, avg=170.95, stdev=42.00, samples=19 00:39:35.175 lat (msec) : 50=6.86%, 100=62.73%, 250=29.48%, 500=0.93% 00:39:35.175 cpu : usr=31.59%, sys=1.15%, ctx=848, majf=0, minf=9 00:39:35.175 IO depths : 1=1.7%, 2=3.6%, 4=12.3%, 8=70.9%, 16=11.5%, 32=0.0%, >=64=0.0% 00:39:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 issued rwts: total=1720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.175 filename2: (groupid=0, jobs=1): err= 0: pid=129316: Mon Jul 15 13:23:46 2024 00:39:35.175 read: IOPS=185, BW=743KiB/s (761kB/s)(7452KiB/10026msec) 00:39:35.175 slat (usec): min=7, max=8044, avg=23.96, stdev=229.08 00:39:35.175 clat (msec): min=38, max=298, avg=85.84, stdev=33.60 00:39:35.175 lat (msec): min=38, max=298, avg=85.87, stdev=33.59 00:39:35.175 clat percentiles (msec): 00:39:35.175 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 63], 00:39:35.175 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:39:35.175 | 70.00th=[ 92], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 134], 
00:39:35.175 | 99.00th=[ 253], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 300], 00:39:35.175 | 99.99th=[ 300] 00:39:35.175 bw ( KiB/s): min= 384, max= 944, per=4.31%, avg=741.21, stdev=157.18, samples=19 00:39:35.175 iops : min= 96, max= 236, avg=185.26, stdev=39.29, samples=19 00:39:35.175 lat (msec) : 50=6.33%, 100=72.57%, 250=19.91%, 500=1.18% 00:39:35.175 cpu : usr=40.06%, sys=1.03%, ctx=1307, majf=0, minf=9 00:39:35.175 IO depths : 1=1.5%, 2=3.2%, 4=10.5%, 8=72.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:39:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.175 filename2: (groupid=0, jobs=1): err= 0: pid=129317: Mon Jul 15 13:23:46 2024 00:39:35.175 read: IOPS=198, BW=794KiB/s (813kB/s)(7952KiB/10018msec) 00:39:35.175 slat (usec): min=4, max=8041, avg=21.31, stdev=232.26 00:39:35.175 clat (msec): min=26, max=295, avg=80.49, stdev=34.34 00:39:35.175 lat (msec): min=26, max=295, avg=80.51, stdev=34.33 00:39:35.175 clat percentiles (msec): 00:39:35.175 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:39:35.175 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:39:35.175 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 115], 95.00th=[ 127], 00:39:35.175 | 99.00th=[ 247], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:39:35.175 | 99.99th=[ 296] 00:39:35.175 bw ( KiB/s): min= 384, max= 1024, per=4.58%, avg=788.80, stdev=164.19, samples=20 00:39:35.175 iops : min= 96, max= 256, avg=197.20, stdev=41.05, samples=20 00:39:35.175 lat (msec) : 50=14.74%, 100=68.26%, 250=16.20%, 500=0.80% 00:39:35.175 cpu : usr=39.95%, sys=1.45%, ctx=1300, majf=0, minf=9 00:39:35.175 IO depths : 1=2.1%, 2=4.4%, 4=13.1%, 8=69.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:39:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.175 filename2: (groupid=0, jobs=1): err= 0: pid=129318: Mon Jul 15 13:23:46 2024 00:39:35.175 read: IOPS=193, BW=774KiB/s (792kB/s)(7780KiB/10056msec) 00:39:35.175 slat (usec): min=3, max=8066, avg=46.42, stdev=401.65 00:39:35.175 clat (msec): min=7, max=370, avg=82.40, stdev=43.38 00:39:35.175 lat (msec): min=7, max=370, avg=82.45, stdev=43.39 00:39:35.175 clat percentiles (msec): 00:39:35.175 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:39:35.175 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:39:35.175 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 142], 00:39:35.175 | 99.00th=[ 284], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:39:35.175 | 99.99th=[ 372] 00:39:35.175 bw ( KiB/s): min= 256, max= 1502, per=4.48%, avg=771.10, stdev=267.42, samples=20 00:39:35.175 iops : min= 64, max= 375, avg=192.75, stdev=66.78, samples=20 00:39:35.175 lat (msec) : 10=3.29%, 50=11.77%, 100=63.44%, 250=19.28%, 500=2.21% 00:39:35.175 cpu : usr=32.21%, sys=0.94%, ctx=927, majf=0, minf=9 00:39:35.175 IO depths : 1=0.8%, 2=1.6%, 4=8.1%, 8=76.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:39:35.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.175 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:35.175 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:35.175 00:39:35.175 Run status group 0 (all jobs): 00:39:35.175 READ: bw=16.8MiB/s (17.6MB/s), 614KiB/s-874KiB/s (628kB/s-895kB/s), io=169MiB (177MB), run=10006-10056msec 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 bdev_null0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.175 [2024-07-15 13:23:46.357576] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:35.175 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:35.176 13:23:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.176 bdev_null1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:35.176 { 00:39:35.176 "params": { 00:39:35.176 "name": "Nvme$subsystem", 00:39:35.176 "trtype": "$TEST_TRANSPORT", 00:39:35.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:35.176 "adrfam": "ipv4", 00:39:35.176 "trsvcid": "$NVMF_PORT", 
00:39:35.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:35.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:35.176 "hdgst": ${hdgst:-false}, 00:39:35.176 "ddgst": ${ddgst:-false} 00:39:35.176 }, 00:39:35.176 "method": "bdev_nvme_attach_controller" 00:39:35.176 } 00:39:35.176 EOF 00:39:35.176 )") 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:35.176 { 00:39:35.176 "params": { 00:39:35.176 "name": "Nvme$subsystem", 00:39:35.176 "trtype": "$TEST_TRANSPORT", 00:39:35.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:35.176 "adrfam": "ipv4", 00:39:35.176 "trsvcid": "$NVMF_PORT", 00:39:35.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:35.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:35.176 "hdgst": ${hdgst:-false}, 00:39:35.176 "ddgst": ${ddgst:-false} 00:39:35.176 }, 00:39:35.176 "method": "bdev_nvme_attach_controller" 00:39:35.176 } 00:39:35.176 EOF 00:39:35.176 )") 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 
00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=, 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:39:35.176 "params": { 00:39:35.176 "name": "Nvme0", 00:39:35.176 "trtype": "tcp", 00:39:35.176 "traddr": "10.0.0.2", 00:39:35.176 "adrfam": "ipv4", 00:39:35.176 "trsvcid": "4420", 00:39:35.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:35.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:35.176 "hdgst": false, 00:39:35.176 "ddgst": false 00:39:35.176 }, 00:39:35.176 "method": "bdev_nvme_attach_controller" 00:39:35.176 },{ 00:39:35.176 "params": { 00:39:35.176 "name": "Nvme1", 00:39:35.176 "trtype": "tcp", 00:39:35.176 "traddr": "10.0.0.2", 00:39:35.176 "adrfam": "ipv4", 00:39:35.176 "trsvcid": "4420", 00:39:35.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:35.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:35.176 "hdgst": false, 00:39:35.176 "ddgst": false 00:39:35.176 }, 00:39:35.176 "method": "bdev_nvme_attach_controller" 00:39:35.176 }' 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:35.176 13:23:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:35.176 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:35.176 ... 00:39:35.176 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:35.176 ... 
00:39:35.176 fio-3.35 00:39:35.176 Starting 4 threads 00:39:40.432 00:39:40.432 filename0: (groupid=0, jobs=1): err= 0: pid=129436: Mon Jul 15 13:23:52 2024 00:39:40.432 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:39:40.432 slat (nsec): min=7862, max=47024, avg=15790.37, stdev=4424.68 00:39:40.432 clat (usec): min=1693, max=9701, avg=4235.51, stdev=468.53 00:39:40.432 lat (usec): min=1705, max=9711, avg=4251.30, stdev=468.41 00:39:40.432 clat percentiles (usec): 00:39:40.432 | 1.00th=[ 2966], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113], 00:39:40.432 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:39:40.432 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4883], 00:39:40.432 | 99.00th=[ 6325], 99.50th=[ 6915], 99.90th=[ 7767], 99.95th=[ 8717], 00:39:40.432 | 99.99th=[ 9765] 00:39:40.432 bw ( KiB/s): min=14733, max=15104, per=25.09%, avg=14922.33, stdev=165.16, samples=9 00:39:40.432 iops : min= 1841, max= 1888, avg=1865.22, stdev=20.74, samples=9 00:39:40.432 lat (msec) : 2=0.26%, 4=3.07%, 10=96.67% 00:39:40.432 cpu : usr=93.36%, sys=5.28%, ctx=14, majf=0, minf=9 00:39:40.432 IO depths : 1=8.2%, 2=25.0%, 4=50.0%, 8=16.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 issued rwts: total=9280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:40.432 filename0: (groupid=0, jobs=1): err= 0: pid=129437: Mon Jul 15 13:23:52 2024 00:39:40.432 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 00:39:40.432 slat (nsec): min=7963, max=51282, avg=14363.52, stdev=5735.70 00:39:40.432 clat (usec): min=1898, max=6754, avg=4241.21, stdev=292.44 00:39:40.432 lat (usec): min=1912, max=6770, avg=4255.57, stdev=291.68 00:39:40.432 clat percentiles (usec): 00:39:40.432 | 1.00th=[ 3326], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:39:40.432 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4228], 00:39:40.432 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4621], 00:39:40.432 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 5932], 99.95th=[ 6390], 00:39:40.432 | 99.99th=[ 6783] 00:39:40.432 bw ( KiB/s): min=14720, max=15104, per=25.14%, avg=14950.78, stdev=137.63, samples=9 00:39:40.432 iops : min= 1840, max= 1888, avg=1868.78, stdev=17.25, samples=9 00:39:40.432 lat (msec) : 2=0.01%, 4=2.50%, 10=97.49% 00:39:40.432 cpu : usr=93.90%, sys=4.84%, ctx=14, majf=0, minf=9 00:39:40.432 IO depths : 1=9.8%, 2=25.0%, 4=50.0%, 8=15.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 issued rwts: total=9296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:40.432 filename1: (groupid=0, jobs=1): err= 0: pid=129438: Mon Jul 15 13:23:52 2024 00:39:40.432 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 00:39:40.432 slat (nsec): min=7885, max=61951, avg=16114.64, stdev=3946.86 00:39:40.432 clat (usec): min=1844, max=7045, avg=4229.04, stdev=302.15 00:39:40.432 lat (usec): min=1856, max=7058, avg=4245.16, stdev=302.13 00:39:40.432 clat percentiles (usec): 00:39:40.432 | 1.00th=[ 3261], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113], 00:39:40.432 | 30.00th=[ 4146], 
40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:39:40.432 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4686], 00:39:40.432 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6718], 00:39:40.432 | 99.99th=[ 7046] 00:39:40.432 bw ( KiB/s): min=14720, max=15104, per=25.13%, avg=14947.56, stdev=139.89, samples=9 00:39:40.432 iops : min= 1840, max= 1888, avg=1868.44, stdev=17.49, samples=9 00:39:40.432 lat (msec) : 2=0.03%, 4=2.14%, 10=97.83% 00:39:40.432 cpu : usr=93.50%, sys=5.24%, ctx=7, majf=0, minf=0 00:39:40.432 IO depths : 1=9.5%, 2=25.0%, 4=50.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 issued rwts: total=9296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:40.432 filename1: (groupid=0, jobs=1): err= 0: pid=129439: Mon Jul 15 13:23:52 2024 00:39:40.432 read: IOPS=1863, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5004msec) 00:39:40.432 slat (nsec): min=3774, max=50927, avg=9846.27, stdev=3529.07 00:39:40.432 clat (usec): min=1235, max=6094, avg=4249.58, stdev=289.87 00:39:40.432 lat (usec): min=1243, max=6117, avg=4259.43, stdev=290.22 00:39:40.432 clat percentiles (usec): 00:39:40.432 | 1.00th=[ 3982], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:39:40.432 | 30.00th=[ 4178], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:39:40.432 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:39:40.432 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 5997], 99.95th=[ 6063], 00:39:40.432 | 99.99th=[ 6063] 00:39:40.432 bw ( KiB/s): min=14720, max=15264, per=25.22%, avg=14995.78, stdev=172.40, samples=9 00:39:40.432 iops : min= 1840, max= 1908, avg=1874.44, stdev=21.58, samples=9 00:39:40.432 lat (msec) : 2=0.21%, 4=0.82%, 10=98.97% 00:39:40.432 cpu : usr=93.12%, sys=5.52%, ctx=8, majf=0, minf=0 00:39:40.432 IO depths : 1=5.0%, 2=17.5%, 4=57.4%, 8=20.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.432 issued rwts: total=9325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:40.432 00:39:40.432 Run status group 0 (all jobs): 00:39:40.432 READ: bw=58.1MiB/s (60.9MB/s), 14.5MiB/s-14.6MiB/s (15.2MB/s-15.3MB/s), io=291MiB (305MB), run=5002-5004msec 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 ************************************ 00:39:40.432 END TEST fio_dif_rand_params 00:39:40.432 ************************************ 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.432 00:39:40.432 real 0m23.374s 00:39:40.432 user 2m4.895s 00:39:40.432 sys 0m5.587s 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 13:23:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:40.432 13:23:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:40.432 13:23:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:40.432 13:23:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:40.432 13:23:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:40.432 ************************************ 00:39:40.432 START TEST fio_dif_digest 00:39:40.432 ************************************ 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:40.432 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:40.433 bdev_null0 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:40.433 [2024-07-15 13:23:52.472986] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@536 -- # config=() 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@536 -- # local subsystem config 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:39:40.433 { 00:39:40.433 "params": { 00:39:40.433 "name": "Nvme$subsystem", 00:39:40.433 "trtype": "$TEST_TRANSPORT", 00:39:40.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.433 "adrfam": "ipv4", 00:39:40.433 "trsvcid": "$NVMF_PORT", 00:39:40.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.433 "hdgst": ${hdgst:-false}, 00:39:40.433 "ddgst": ${ddgst:-false} 00:39:40.433 }, 00:39:40.433 "method": 
"bdev_nvme_attach_controller" 00:39:40.433 } 00:39:40.433 EOF 00:39:40.433 )") 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # cat 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # jq . 
00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@561 -- # IFS=, 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:39:40.433 "params": { 00:39:40.433 "name": "Nvme0", 00:39:40.433 "trtype": "tcp", 00:39:40.433 "traddr": "10.0.0.2", 00:39:40.433 "adrfam": "ipv4", 00:39:40.433 "trsvcid": "4420", 00:39:40.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:40.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:40.433 "hdgst": true, 00:39:40.433 "ddgst": true 00:39:40.433 }, 00:39:40.433 "method": "bdev_nvme_attach_controller" 00:39:40.433 }' 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:40.433 13:23:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:40.433 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:40.433 ... 
00:39:40.433 fio-3.35 00:39:40.433 Starting 3 threads 00:39:52.674 00:39:52.674 filename0: (groupid=0, jobs=1): err= 0: pid=129540: Mon Jul 15 13:24:03 2024 00:39:52.674 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10043msec) 00:39:52.674 slat (nsec): min=4709, max=68626, avg=15686.90, stdev=5804.43 00:39:52.674 clat (usec): min=10631, max=59224, avg=13935.84, stdev=4331.87 00:39:52.674 lat (usec): min=10653, max=59241, avg=13951.53, stdev=4332.52 00:39:52.674 clat percentiles (usec): 00:39:52.674 | 1.00th=[11600], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:39:52.674 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:39:52.674 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14877], 95.00th=[16057], 00:39:52.674 | 99.00th=[50594], 99.50th=[53740], 99.90th=[56886], 99.95th=[58459], 00:39:52.674 | 99.99th=[58983] 00:39:52.674 bw ( KiB/s): min=19968, max=29696, per=37.59%, avg=27568.45, stdev=2315.93, samples=20 00:39:52.674 iops : min= 156, max= 232, avg=215.35, stdev=18.10, samples=20 00:39:52.674 lat (msec) : 20=98.65%, 50=0.32%, 100=1.02% 00:39:52.674 cpu : usr=92.26%, sys=6.24%, ctx=14, majf=0, minf=0 00:39:52.674 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:52.674 filename0: (groupid=0, jobs=1): err= 0: pid=129541: Mon Jul 15 13:24:03 2024 00:39:52.674 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10046msec) 00:39:52.674 slat (nsec): min=8159, max=60360, avg=14358.75, stdev=4838.37 00:39:52.674 clat (usec): min=8000, max=64443, avg=14805.54, stdev=3357.42 00:39:52.674 lat (usec): min=8021, max=64452, avg=14819.90, stdev=3357.52 00:39:52.674 clat percentiles (usec): 00:39:52.674 | 1.00th=[ 8979], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:39:52.674 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:39:52.674 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16450], 95.00th=[17171], 00:39:52.674 | 99.00th=[20317], 99.50th=[43779], 99.90th=[60031], 99.95th=[64226], 00:39:52.674 | 99.99th=[64226] 00:39:52.674 bw ( KiB/s): min=17664, max=28160, per=35.29%, avg=25883.45, stdev=2251.28, samples=20 00:39:52.674 iops : min= 138, max= 220, avg=202.20, stdev=17.59, samples=20 00:39:52.674 lat (msec) : 10=2.32%, 20=96.35%, 50=1.03%, 100=0.30% 00:39:52.674 cpu : usr=92.60%, sys=5.99%, ctx=6, majf=0, minf=9 00:39:52.674 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:52.674 filename0: (groupid=0, jobs=1): err= 0: pid=129542: Mon Jul 15 13:24:03 2024 00:39:52.674 read: IOPS=156, BW=19.6MiB/s (20.6MB/s)(196MiB/10003msec) 00:39:52.674 slat (nsec): min=8261, max=46706, avg=16519.39, stdev=5755.92 00:39:52.674 clat (usec): min=10420, max=64505, avg=19089.75, stdev=3404.91 00:39:52.674 lat (usec): min=10440, max=64522, avg=19106.27, stdev=3405.64 00:39:52.674 clat percentiles (usec): 00:39:52.674 | 1.00th=[11207], 5.00th=[16712], 10.00th=[17433], 20.00th=[17957], 
00:39:52.674 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:39:52.674 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20579], 95.00th=[21627], 00:39:52.674 | 99.00th=[25035], 99.50th=[52691], 99.90th=[61604], 99.95th=[64750], 00:39:52.674 | 99.99th=[64750] 00:39:52.674 bw ( KiB/s): min=14592, max=22016, per=27.37%, avg=20071.42, stdev=1496.45, samples=19 00:39:52.674 iops : min= 114, max= 172, avg=156.79, stdev=11.68, samples=19 00:39:52.674 lat (msec) : 20=82.48%, 50=16.94%, 100=0.57% 00:39:52.674 cpu : usr=93.08%, sys=5.56%, ctx=19, majf=0, minf=9 00:39:52.674 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.674 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:52.674 00:39:52.674 Run status group 0 (all jobs): 00:39:52.674 READ: bw=71.6MiB/s (75.1MB/s), 19.6MiB/s-26.8MiB/s (20.6MB/s-28.1MB/s), io=720MiB (754MB), run=10003-10046msec 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:52.674 ************************************ 00:39:52.674 END TEST fio_dif_digest 00:39:52.674 ************************************ 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.674 00:39:52.674 real 0m10.911s 00:39:52.674 user 0m28.474s 00:39:52.674 sys 0m2.019s 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:52.674 13:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:52.674 13:24:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:52.674 13:24:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@492 -- # nvmfcleanup 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:52.674 rmmod nvme_tcp 00:39:52.674 rmmod nvme_fabrics 
00:39:52.674 rmmod nvme_keyring 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@493 -- # '[' -n 128836 ']' 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@494 -- # killprocess 128836 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 128836 ']' 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 128836 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128836 00:39:52.674 killing process with pid 128836 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128836' 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@967 -- # kill 128836 00:39:52.674 13:24:03 nvmf_dif -- common/autotest_common.sh@972 -- # wait 128836 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' iso == iso ']' 00:39:52.674 13:24:03 nvmf_dif -- nvmf/common.sh@497 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:52.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:52.675 Waiting for block devices as requested 00:39:52.675 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:52.675 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@282 -- # remove_spdk_ns 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.675 13:24:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:52.675 13:24:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.675 13:24:04 nvmf_dif -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:39:52.675 00:39:52.675 real 0m58.603s 00:39:52.675 user 3m46.873s 00:39:52.675 sys 0m15.943s 00:39:52.675 13:24:04 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:52.675 13:24:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:52.675 ************************************ 00:39:52.675 END TEST nvmf_dif 00:39:52.675 ************************************ 00:39:52.675 13:24:04 -- common/autotest_common.sh@1142 -- # return 0 00:39:52.675 13:24:04 -- spdk/autotest.sh@296 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:52.675 13:24:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:52.675 13:24:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:52.675 13:24:04 -- common/autotest_common.sh@10 -- # set +x 00:39:52.675 ************************************ 00:39:52.675 START TEST nvmf_abort_qd_sizes 00:39:52.675 ************************************ 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:52.675 * Looking for test storage... 00:39:52.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:52.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # prepare_net_devs 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # local -g is_hw=no 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # remove_spdk_ns 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ virt != virt ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # [[ no == yes ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # [[ virt == phy ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # [[ virt == phy-fallback ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@435 -- # [[ tcp == tcp ]] 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # nvmf_veth_init 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:52.675 
13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_BRIDGE=nvmf_br 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br nomaster 00:39:52.675 Cannot find device "nvmf_tgt_br" 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br2 nomaster 00:39:52.675 Cannot find device "nvmf_tgt_br2" 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link set nvmf_init_br down 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_tgt_br down 00:39:52.675 Cannot find device "nvmf_tgt_br" 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_tgt_br2 down 00:39:52.675 Cannot find device "nvmf_tgt_br2" 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link delete nvmf_br type bridge 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link delete nvmf_init_if 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:52.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:52.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip netns add nvmf_tgt_ns_spdk 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@179 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_init_if up 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip link set nvmf_init_br up 00:39:52.675 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip link set nvmf_tgt_br up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip link set nvmf_tgt_br2 up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link add nvmf_br type bridge 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_br up 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_init_br master nvmf_br 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # ping -c 1 10.0.0.2 00:39:52.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:52.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:39:52.676 00:39:52.676 --- 10.0.0.2 ping statistics --- 00:39:52.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.676 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@210 -- # ping -c 1 10.0.0.3 00:39:52.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:52.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:39:52.676 00:39:52.676 --- 10.0.0.3 ping statistics --- 00:39:52.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.676 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:52.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:52.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:39:52.676 00:39:52.676 --- 10.0.0.1 ping statistics --- 00:39:52.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.676 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@437 -- # return 0 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # '[' iso == iso ']' 00:39:52.676 13:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:53.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:53.261 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:53.261 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@485 -- # nvmfpid=130117 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@486 -- # waitforlisten 130117 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 130117 ']' 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:53.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:53.261 13:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:53.519 [2024-07-15 13:24:05.747531] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
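Before the target application comes up, nvmf_veth_init (traced above) builds the virtual topology the whole suite runs on: the SPDK target lives in the nvmf_tgt_ns_spdk network namespace and is reached from the host over veth pairs joined by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. A condensed sketch of that wiring, reduced to the single target interface the tests below actually use (names and addresses taken from the trace; this is a sketch of the idea, not the full helper):

# initiator side stays in the root namespace, target side moves into its own netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the two host-side peers so initiator and target can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # host -> target namespace, as verified above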
00:39:53.519 [2024-07-15 13:24:05.747643] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.519 [2024-07-15 13:24:05.888159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:53.519 [2024-07-15 13:24:05.961869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:53.519 [2024-07-15 13:24:05.961933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:53.519 [2024-07-15 13:24:05.961947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:53.519 [2024-07-15 13:24:05.961958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:53.519 [2024-07-15 13:24:05.961967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:53.519 [2024-07-15 13:24:05.962097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.519 [2024-07-15 13:24:05.962175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:53.519 [2024-07-15 13:24:05.962678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:53.519 [2024-07-15 13:24:05.962717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:39:54.450 13:24:06 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:54.450 13:24:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:54.450 ************************************ 00:39:54.450 START TEST spdk_target_abort 00:39:54.450 ************************************ 00:39:54.450 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:54.450 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:54.450 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:39:54.450 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.450 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.707 spdk_targetn1 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.707 [2024-07-15 13:24:06.969586] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:54.707 13:24:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:54.707 [2024-07-15 13:24:06.997744] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:54.707 13:24:07 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:54.707 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:54.708 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:54.708 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:54.708 13:24:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:57.985 Initializing NVMe Controllers 00:39:57.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:57.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:57.985 Initialization complete. Launching workers. 
00:39:57.985 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11339, failed: 0 00:39:57.985 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1062, failed to submit 10277 00:39:57.985 success 767, unsuccess 295, failed 0 00:39:57.985 13:24:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:57.985 13:24:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:01.265 Initializing NVMe Controllers 00:40:01.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:01.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:01.265 Initialization complete. Launching workers. 00:40:01.265 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5948, failed: 0 00:40:01.265 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 4732 00:40:01.265 success 244, unsuccess 972, failed 0 00:40:01.265 13:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:01.265 13:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:04.584 Initializing NVMe Controllers 00:40:04.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:04.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:04.584 Initialization complete. Launching workers. 
00:40:04.584 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29066, failed: 0 00:40:04.584 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2532, failed to submit 26534 00:40:04.584 success 371, unsuccess 2161, failed 0 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:04.584 13:24:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 130117 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 130117 ']' 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 130117 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130117 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:05.518 killing process with pid 130117 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130117' 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 130117 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 130117 00:40:05.518 00:40:05.518 real 0m11.035s 00:40:05.518 user 0m43.820s 00:40:05.518 sys 0m1.911s 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:05.518 ************************************ 00:40:05.518 END TEST spdk_target_abort 00:40:05.518 ************************************ 00:40:05.518 13:24:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:05.518 13:24:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:05.518 13:24:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:05.518 13:24:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:05.518 13:24:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:05.518 
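spdk_target_abort attaches the local NVMe device as spdk_targetn1, exports it over NVMe/TCP as nqn.2016-06.io.spdk:testnqn, and then runs the abort example three times with increasing abort queue depths; kernel_target_abort below repeats the same sweep against the in-kernel target. A minimal sketch of the sweep that rabort() performs, using the binary path and connection string from the trace:

# queue depths mirror qds=(4 24 64) in abort_qd_sizes.sh
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done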
************************************ 00:40:05.518 START TEST kernel_target_abort 00:40:05.518 ************************************ 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # local ip 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@746 -- # ip_candidates=() 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@746 -- # local -A ip_candidates 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@643 -- # local block nvme 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:05.518 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@646 -- # modprobe nvmet 00:40:05.775 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:05.775 13:24:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:06.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:06.033 Waiting for block devices as requested 00:40:06.033 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:06.033 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:06.033 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:40:06.290 No valid GPT data, bailing 00:40:06.290 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:06.290 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:06.290 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n2 ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # is_block_zoned nvme0n2 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # block_in_use nvme0n2 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:40:06.291 No valid GPT data, bailing 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n2 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n3 ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # is_block_zoned nvme0n3 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # block_in_use nvme0n3 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:40:06.291 No valid GPT data, bailing 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n3 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme1n1 ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # is_block_zoned nvme1n1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # block_in_use nvme1n1 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:40:06.291 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:40:06.291 No valid GPT data, bailing 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # nvme=/dev/nvme1n1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # [[ 
-b /dev/nvme1n1 ]] 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo /dev/nvme1n1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # echo tcp 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # echo 4420 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # echo ipv4 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a --hostid=2cb7827d-6d51-45f2-a3fc-4dedac25353a -a 10.0.0.1 -t tcp -s 4420 00:40:06.549 00:40:06.549 Discovery Log Number of Records 2, Generation counter 2 00:40:06.549 =====Discovery Log Entry 0====== 00:40:06.549 trtype: tcp 00:40:06.549 adrfam: ipv4 00:40:06.549 subtype: current discovery subsystem 00:40:06.549 treq: not specified, sq flow control disable supported 00:40:06.549 portid: 1 00:40:06.549 trsvcid: 4420 00:40:06.549 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:06.549 traddr: 10.0.0.1 00:40:06.549 eflags: none 00:40:06.549 sectype: none 00:40:06.549 =====Discovery Log Entry 1====== 00:40:06.549 trtype: tcp 00:40:06.549 adrfam: ipv4 00:40:06.549 subtype: nvme subsystem 00:40:06.549 treq: not specified, sq flow control disable supported 00:40:06.549 portid: 1 00:40:06.549 trsvcid: 4420 00:40:06.549 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:06.549 traddr: 10.0.0.1 00:40:06.549 eflags: none 00:40:06.549 sectype: none 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:06.549 13:24:18 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:06.549 13:24:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:09.843 Initializing NVMe Controllers 00:40:09.843 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:09.843 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:09.843 Initialization complete. Launching workers. 00:40:09.843 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33722, failed: 0 00:40:09.843 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33722, failed to submit 0 00:40:09.843 success 0, unsuccess 33722, failed 0 00:40:09.843 13:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:09.843 13:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:13.151 Initializing NVMe Controllers 00:40:13.151 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:13.151 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:13.151 Initialization complete. Launching workers. 
00:40:13.151 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67180, failed: 0 00:40:13.151 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28317, failed to submit 38863 00:40:13.151 success 0, unsuccess 28317, failed 0 00:40:13.151 13:24:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:13.151 13:24:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:16.433 Initializing NVMe Controllers 00:40:16.433 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:16.433 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:16.433 Initialization complete. Launching workers. 00:40:16.433 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77171, failed: 0 00:40:16.433 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19258, failed to submit 57913 00:40:16.433 success 0, unsuccess 19258, failed 0 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # echo 0 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:40:16.433 13:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:16.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:18.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:18.063 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:18.321 00:40:18.321 real 0m12.592s 00:40:18.321 user 0m6.207s 00:40:18.321 sys 0m3.729s 00:40:18.322 13:24:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.322 ************************************ 00:40:18.322 END TEST kernel_target_abort 00:40:18.322 ************************************ 00:40:18.322 13:24:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:18.322 
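The kernel_target_abort run above drives an in-kernel nvmet target entirely through configfs. A minimal sketch of the configure/clean sequence traced from nvmf/common.sh follows; the redirect targets (attr_model, attr_allow_any_host, device_path, enable, addr_*) are inferred from the standard Linux nvmet configfs layout because xtrace does not record redirections, so the attribute file names are assumptions rather than a verbatim excerpt of the script:

    # Sketch: configure an in-kernel NVMe-oF/TCP target. NQN, backing device and address are
    # taken from the trace above; attribute file names are assumed from the usual nvmet layout.
    nqn=nqn.2016-06.io.spdk:testnqn
    mkdir /sys/kernel/config/nvmet/subsystems/$nqn
    mkdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    mkdir /sys/kernel/config/nvmet/ports/1
    echo SPDK-$nqn    > /sys/kernel/config/nvmet/subsystems/$nqn/attr_model
    echo 1            > /sys/kernel/config/nvmet/subsystems/$nqn/attr_allow_any_host
    echo /dev/nvme1n1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/device_path
    echo 1            > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/$nqn /sys/kernel/config/nvmet/ports/1/subsystems/
    # Teardown, mirroring clean_kernel_target in the trace above
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet

Once the port symlink is in place, the `nvme discover` call shown earlier in the trace returns the two discovery-log records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn), which is what the abort example then connects to.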
13:24:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # nvmfcleanup 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:18.322 rmmod nvme_tcp 00:40:18.322 rmmod nvme_fabrics 00:40:18.322 rmmod nvme_keyring 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # '[' -n 130117 ']' 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # killprocess 130117 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 130117 ']' 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 130117 00:40:18.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (130117) - No such process 00:40:18.322 Process with pid 130117 is not found 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 130117 is not found' 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' iso == iso ']' 00:40:18.322 13:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@497 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:18.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:18.599 Waiting for block devices as requested 00:40:18.599 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:18.857 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@282 -- # remove_spdk_ns 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip -4 addr flush nvmf_init_if 00:40:18.857 00:40:18.857 real 0m26.924s 00:40:18.857 user 0m51.241s 00:40:18.857 sys 0m6.972s 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.857 13:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:18.857 ************************************ 00:40:18.857 END TEST nvmf_abort_qd_sizes 00:40:18.857 ************************************ 00:40:18.857 13:24:31 -- common/autotest_common.sh@1142 -- # return 0 00:40:18.857 13:24:31 -- spdk/autotest.sh@298 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:40:18.857 13:24:31 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:40:18.857 13:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.857 13:24:31 -- common/autotest_common.sh@10 -- # set +x 00:40:18.857 ************************************ 00:40:18.857 START TEST keyring_file 00:40:18.857 ************************************ 00:40:18.857 13:24:31 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:40:19.115 * Looking for test storage... 00:40:19.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:40:19.115 13:24:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:40:19.115 13:24:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:19.115 13:24:31 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:19.115 13:24:31 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:19.115 13:24:31 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:19.115 13:24:31 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:19.115 13:24:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.116 13:24:31 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.116 13:24:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.116 13:24:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:19.116 13:24:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:19.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dDkF3mTluY 00:40:19.116 13:24:31 keyring_file -- 
keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@709 -- # python - 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dDkF3mTluY 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dDkF3mTluY 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dDkF3mTluY 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4JyVFMGr83 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # key=112233445566778899aabbccddeeff00 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:40:19.116 13:24:31 keyring_file -- nvmf/common.sh@709 -- # python - 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4JyVFMGr83 00:40:19.116 13:24:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4JyVFMGr83 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.4JyVFMGr83 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=130982 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 130982 00:40:19.116 13:24:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 130982 ']' 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.116 13:24:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.116 [2024-07-15 13:24:31.551535] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
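The keyring_file test prepares its PSK files with the prep_key helper traced above. Roughly, and leaving the key-wrapping itself to nvmf/common.sh's format_interchange_psk (the inline `python -` call in the trace, whose output format is not reproduced here), the flow is:

    # Sketch of 'prep_key key0 00112233445566778899aabbccddeeff 0'; the temp file name is
    # whatever mktemp returns (/tmp/tmp.dDkF3mTluY in this run).
    path=$(mktemp)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"    # the test later relaxes this to 0660 to provoke a rejection (see below)
    # Later in the test (file.sh@49) the file is registered with the bdevperf instance
    # listening on /var/tmp/bperf.sock:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"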
00:40:19.116 [2024-07-15 13:24:31.551670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130982 ] 00:40:19.374 [2024-07-15 13:24:31.703037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.374 [2024-07-15 13:24:31.789865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.633 13:24:31 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:19.633 13:24:31 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:19.633 13:24:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:19.633 13:24:31 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.633 13:24:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.633 [2024-07-15 13:24:31.966810] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.633 null0 00:40:19.633 [2024-07-15 13:24:31.998733] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:19.633 [2024-07-15 13:24:31.998981] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:19.633 [2024-07-15 13:24:32.006705] tcp.c:3751:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:19.633 13:24:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.633 [2024-07-15 13:24:32.018710] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:19.633 2024/07/15 13:24:32 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:40:19.633 request: 00:40:19.633 { 00:40:19.633 "method": "nvmf_subsystem_add_listener", 00:40:19.633 "params": { 00:40:19.633 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.633 "secure_channel": false, 00:40:19.633 "listen_address": { 00:40:19.633 "trtype": "tcp", 00:40:19.633 "traddr": "127.0.0.1", 00:40:19.633 "trsvcid": "4420" 00:40:19.633 } 00:40:19.633 } 00:40:19.633 } 00:40:19.633 Got JSON-RPC error 
response 00:40:19.633 GoRPCClient: error on JSON-RPC call 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:19.633 13:24:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=131002 00:40:19.633 13:24:32 keyring_file -- keyring/file.sh@48 -- # waitforlisten 131002 /var/tmp/bperf.sock 00:40:19.633 13:24:32 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 131002 ']' 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:19.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.633 13:24:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.891 [2024-07-15 13:24:32.104690] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 00:40:19.891 [2024-07-15 13:24:32.104854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131002 ] 00:40:19.891 [2024-07-15 13:24:32.247441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.891 [2024-07-15 13:24:32.334370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.827 13:24:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.827 13:24:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:20.827 13:24:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:20.827 13:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:21.085 13:24:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4JyVFMGr83 00:40:21.085 13:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4JyVFMGr83 00:40:21.343 13:24:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:21.343 13:24:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:21.343 13:24:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.343 13:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.343 13:24:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.601 13:24:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.dDkF3mTluY == 
\/\t\m\p\/\t\m\p\.\d\D\k\F\3\m\T\l\u\Y ]] 00:40:21.601 13:24:33 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:40:21.601 13:24:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:21.601 13:24:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.601 13:24:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.601 13:24:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:21.859 13:24:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4JyVFMGr83 == \/\t\m\p\/\t\m\p\.\4\J\y\V\F\M\G\r\8\3 ]] 00:40:21.859 13:24:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:21.859 13:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:21.859 13:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.859 13:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.859 13:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.859 13:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.117 13:24:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:22.117 13:24:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:22.117 13:24:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:22.117 13:24:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.117 13:24:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.117 13:24:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.117 13:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.376 13:24:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:22.376 13:24:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.376 13:24:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.633 [2024-07-15 13:24:35.047612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:22.891 nvme0n1 00:40:22.891 13:24:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:22.891 13:24:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.891 13:24:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.891 13:24:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.891 13:24:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.891 13:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.149 13:24:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:23.149 13:24:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:23.149 13:24:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:23.149 13:24:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:23.149 13:24:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:40:23.149 13:24:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:23.149 13:24:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:23.407 13:24:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:40:23.407 13:24:35 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:23.407 Running I/O for 1 seconds... 00:40:24.339 00:40:24.339 Latency(us) 00:40:24.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.339 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:24.339 nvme0n1 : 1.01 10870.00 42.46 0.00 0.00 11732.25 3932.16 17754.30 00:40:24.339 =================================================================================================================== 00:40:24.339 Total : 10870.00 42.46 0.00 0.00 11732.25 3932.16 17754.30 00:40:24.339 0 00:40:24.339 13:24:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:24.339 13:24:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:24.904 13:24:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:40:24.904 13:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:24.904 13:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.904 13:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.904 13:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.904 13:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.162 13:24:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:40:25.162 13:24:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:40:25.162 13:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.162 13:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.162 13:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.162 13:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.162 13:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.419 13:24:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:25.419 13:24:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.419 13:24:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:25.419 13:24:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.419 13:24:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:25.420 13:24:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:25.420 13:24:37 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:25.420 13:24:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:40:25.420 13:24:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.420 13:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.677 [2024-07-15 13:24:37.925312] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:25.677 [2024-07-15 13:24:37.925447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2cf30 (107): Transport endpoint is not connected 00:40:25.677 [2024-07-15 13:24:37.926435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2cf30 (9): Bad file descriptor 00:40:25.677 [2024-07-15 13:24:37.927432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:25.677 [2024-07-15 13:24:37.927459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:25.677 [2024-07-15 13:24:37.927471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:25.677 2024/07/15 13:24:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:40:25.677 request: 00:40:25.677 { 00:40:25.677 "method": "bdev_nvme_attach_controller", 00:40:25.677 "params": { 00:40:25.677 "name": "nvme0", 00:40:25.677 "trtype": "tcp", 00:40:25.677 "traddr": "127.0.0.1", 00:40:25.677 "adrfam": "ipv4", 00:40:25.677 "trsvcid": "4420", 00:40:25.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:25.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:25.677 "prchk_reftag": false, 00:40:25.677 "prchk_guard": false, 00:40:25.677 "hdgst": false, 00:40:25.677 "ddgst": false, 00:40:25.677 "psk": "key1" 00:40:25.677 } 00:40:25.677 } 00:40:25.677 Got JSON-RPC error response 00:40:25.677 GoRPCClient: error on JSON-RPC call 00:40:25.677 13:24:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:25.677 13:24:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:25.677 13:24:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:25.677 13:24:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:25.677 13:24:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:40:25.677 13:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:25.677 13:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.677 13:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.677 13:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.677 13:24:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.934 13:24:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:40:25.934 
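The repeated refcount assertions in the trace come from two small helpers in test/keyring/common.sh. Reconstructed from the common.sh@8, @10 and @12 lines above (the exact function bodies in the repository may differ), the pattern is:

    # Assumed reconstruction of the helpers exercised at keyring/common.sh@8, @10 and @12
    bperf_cmd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }
    get_key() {
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }
    get_refcnt() {
        get_key "$1" | jq -r .refcnt
    }
    # Usage (file.sh@71-72 in the trace): each key is back to a single reference
    # once no attached controller holds it.
    (( $(get_refcnt key0) == 1 ))
    (( $(get_refcnt key1) == 1 ))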
13:24:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:40:25.934 13:24:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.934 13:24:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.934 13:24:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.934 13:24:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.934 13:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.191 13:24:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:26.192 13:24:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:40:26.192 13:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:26.449 13:24:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:40:26.449 13:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:26.706 13:24:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:40:26.706 13:24:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:40:26.706 13:24:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.964 13:24:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:40:26.964 13:24:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.dDkF3mTluY 00:40:26.964 13:24:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.964 13:24:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:26.964 13:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:27.222 [2024-07-15 13:24:39.438752] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dDkF3mTluY': 0100660 00:40:27.222 [2024-07-15 13:24:39.438806] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:27.222 2024/07/15 13:24:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.dDkF3mTluY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:40:27.222 request: 00:40:27.222 { 00:40:27.222 "method": "keyring_file_add_key", 00:40:27.222 "params": { 00:40:27.222 "name": "key0", 00:40:27.222 "path": "/tmp/tmp.dDkF3mTluY" 00:40:27.222 } 00:40:27.222 } 00:40:27.222 Got JSON-RPC error response 00:40:27.222 GoRPCClient: error on JSON-RPC call 00:40:27.222 13:24:39 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:40:27.222 13:24:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.222 13:24:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.222 13:24:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.222 13:24:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.dDkF3mTluY 00:40:27.222 13:24:39 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:27.222 13:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dDkF3mTluY 00:40:27.479 13:24:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.dDkF3mTluY 00:40:27.479 13:24:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:40:27.479 13:24:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:27.479 13:24:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:27.479 13:24:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:27.479 13:24:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:27.479 13:24:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:27.737 13:24:40 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:40:27.737 13:24:40 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.737 13:24:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.737 13:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.996 [2024-07-15 13:24:40.218920] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dDkF3mTluY': No such file or directory 00:40:27.996 [2024-07-15 13:24:40.218971] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:27.996 [2024-07-15 13:24:40.218998] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:27.996 [2024-07-15 13:24:40.219009] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:27.996 [2024-07-15 13:24:40.219018] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:27.996 2024/07/15 
13:24:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:40:27.996 request: 00:40:27.996 { 00:40:27.996 "method": "bdev_nvme_attach_controller", 00:40:27.996 "params": { 00:40:27.996 "name": "nvme0", 00:40:27.996 "trtype": "tcp", 00:40:27.996 "traddr": "127.0.0.1", 00:40:27.996 "adrfam": "ipv4", 00:40:27.996 "trsvcid": "4420", 00:40:27.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:27.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:27.996 "prchk_reftag": false, 00:40:27.996 "prchk_guard": false, 00:40:27.996 "hdgst": false, 00:40:27.996 "ddgst": false, 00:40:27.996 "psk": "key0" 00:40:27.996 } 00:40:27.996 } 00:40:27.996 Got JSON-RPC error response 00:40:27.996 GoRPCClient: error on JSON-RPC call 00:40:27.996 13:24:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:27.996 13:24:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.996 13:24:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.996 13:24:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.996 13:24:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:40:27.996 13:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:28.254 13:24:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yOtmCPDrOe 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:40:28.254 13:24:40 keyring_file -- nvmf/common.sh@709 -- # python - 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yOtmCPDrOe 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yOtmCPDrOe 00:40:28.254 13:24:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yOtmCPDrOe 00:40:28.254 13:24:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yOtmCPDrOe 00:40:28.254 13:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yOtmCPDrOe 00:40:28.512 13:24:40 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.512 13:24:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.770 nvme0n1 00:40:28.770 13:24:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:40:28.770 13:24:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:28.770 13:24:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.770 13:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.770 13:24:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.770 13:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.028 13:24:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:40:29.028 13:24:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:40:29.028 13:24:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:29.286 13:24:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:40:29.286 13:24:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:40:29.286 13:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.286 13:24:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.286 13:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.544 13:24:42 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:40:29.544 13:24:42 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:40:29.544 13:24:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:29.544 13:24:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:29.544 13:24:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:29.544 13:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.544 13:24:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.803 13:24:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:40:29.803 13:24:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:29.803 13:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:30.368 13:24:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:40:30.368 13:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:30.368 13:24:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:40:30.368 13:24:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:40:30.368 13:24:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yOtmCPDrOe 00:40:30.368 13:24:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yOtmCPDrOe 
00:40:30.626 13:24:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4JyVFMGr83 00:40:30.626 13:24:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4JyVFMGr83 00:40:30.884 13:24:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.884 13:24:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:31.451 nvme0n1 00:40:31.451 13:24:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:40:31.451 13:24:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:31.710 13:24:44 keyring_file -- keyring/file.sh@112 -- # config='{ 00:40:31.710 "subsystems": [ 00:40:31.710 { 00:40:31.710 "subsystem": "keyring", 00:40:31.710 "config": [ 00:40:31.710 { 00:40:31.710 "method": "keyring_file_add_key", 00:40:31.710 "params": { 00:40:31.710 "name": "key0", 00:40:31.710 "path": "/tmp/tmp.yOtmCPDrOe" 00:40:31.710 } 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "method": "keyring_file_add_key", 00:40:31.710 "params": { 00:40:31.710 "name": "key1", 00:40:31.710 "path": "/tmp/tmp.4JyVFMGr83" 00:40:31.710 } 00:40:31.710 } 00:40:31.710 ] 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "subsystem": "iobuf", 00:40:31.710 "config": [ 00:40:31.710 { 00:40:31.710 "method": "iobuf_set_options", 00:40:31.710 "params": { 00:40:31.710 "large_bufsize": 135168, 00:40:31.710 "large_pool_count": 1024, 00:40:31.710 "small_bufsize": 8192, 00:40:31.710 "small_pool_count": 8192 00:40:31.710 } 00:40:31.710 } 00:40:31.710 ] 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "subsystem": "sock", 00:40:31.710 "config": [ 00:40:31.710 { 00:40:31.710 "method": "sock_set_default_impl", 00:40:31.710 "params": { 00:40:31.710 "impl_name": "posix" 00:40:31.710 } 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "method": "sock_impl_set_options", 00:40:31.710 "params": { 00:40:31.710 "enable_ktls": false, 00:40:31.710 "enable_placement_id": 0, 00:40:31.710 "enable_quickack": false, 00:40:31.710 "enable_recv_pipe": true, 00:40:31.710 "enable_zerocopy_send_client": false, 00:40:31.710 "enable_zerocopy_send_server": true, 00:40:31.710 "impl_name": "ssl", 00:40:31.710 "recv_buf_size": 4096, 00:40:31.710 "send_buf_size": 4096, 00:40:31.710 "tls_version": 0, 00:40:31.710 "zerocopy_threshold": 0 00:40:31.710 } 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "method": "sock_impl_set_options", 00:40:31.710 "params": { 00:40:31.710 "enable_ktls": false, 00:40:31.710 "enable_placement_id": 0, 00:40:31.710 "enable_quickack": false, 00:40:31.710 "enable_recv_pipe": true, 00:40:31.710 "enable_zerocopy_send_client": false, 00:40:31.710 "enable_zerocopy_send_server": true, 00:40:31.710 "impl_name": "posix", 00:40:31.710 "recv_buf_size": 2097152, 00:40:31.710 "send_buf_size": 2097152, 00:40:31.710 "tls_version": 0, 00:40:31.710 "zerocopy_threshold": 0 00:40:31.710 } 00:40:31.710 } 00:40:31.710 ] 00:40:31.710 }, 00:40:31.710 { 00:40:31.710 "subsystem": "vmd", 00:40:31.710 "config": [] 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "subsystem": "accel", 00:40:31.711 "config": [ 00:40:31.711 { 00:40:31.711 "method": 
"accel_set_options", 00:40:31.711 "params": { 00:40:31.711 "buf_count": 2048, 00:40:31.711 "large_cache_size": 16, 00:40:31.711 "sequence_count": 2048, 00:40:31.711 "small_cache_size": 128, 00:40:31.711 "task_count": 2048 00:40:31.711 } 00:40:31.711 } 00:40:31.711 ] 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "subsystem": "bdev", 00:40:31.711 "config": [ 00:40:31.711 { 00:40:31.711 "method": "bdev_set_options", 00:40:31.711 "params": { 00:40:31.711 "bdev_auto_examine": true, 00:40:31.711 "bdev_io_cache_size": 256, 00:40:31.711 "bdev_io_pool_size": 65535, 00:40:31.711 "iobuf_large_cache_size": 16, 00:40:31.711 "iobuf_small_cache_size": 128 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_raid_set_options", 00:40:31.711 "params": { 00:40:31.711 "process_window_size_kb": 1024 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_iscsi_set_options", 00:40:31.711 "params": { 00:40:31.711 "timeout_sec": 30 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_nvme_set_options", 00:40:31.711 "params": { 00:40:31.711 "action_on_timeout": "none", 00:40:31.711 "allow_accel_sequence": false, 00:40:31.711 "arbitration_burst": 0, 00:40:31.711 "bdev_retry_count": 3, 00:40:31.711 "ctrlr_loss_timeout_sec": 0, 00:40:31.711 "delay_cmd_submit": true, 00:40:31.711 "dhchap_dhgroups": [ 00:40:31.711 "null", 00:40:31.711 "ffdhe2048", 00:40:31.711 "ffdhe3072", 00:40:31.711 "ffdhe4096", 00:40:31.711 "ffdhe6144", 00:40:31.711 "ffdhe8192" 00:40:31.711 ], 00:40:31.711 "dhchap_digests": [ 00:40:31.711 "sha256", 00:40:31.711 "sha384", 00:40:31.711 "sha512" 00:40:31.711 ], 00:40:31.711 "disable_auto_failback": false, 00:40:31.711 "fast_io_fail_timeout_sec": 0, 00:40:31.711 "generate_uuids": false, 00:40:31.711 "high_priority_weight": 0, 00:40:31.711 "io_path_stat": false, 00:40:31.711 "io_queue_requests": 512, 00:40:31.711 "keep_alive_timeout_ms": 10000, 00:40:31.711 "low_priority_weight": 0, 00:40:31.711 "medium_priority_weight": 0, 00:40:31.711 "nvme_adminq_poll_period_us": 10000, 00:40:31.711 "nvme_error_stat": false, 00:40:31.711 "nvme_ioq_poll_period_us": 0, 00:40:31.711 "rdma_cm_event_timeout_ms": 0, 00:40:31.711 "rdma_max_cq_size": 0, 00:40:31.711 "rdma_srq_size": 0, 00:40:31.711 "reconnect_delay_sec": 0, 00:40:31.711 "timeout_admin_us": 0, 00:40:31.711 "timeout_us": 0, 00:40:31.711 "transport_ack_timeout": 0, 00:40:31.711 "transport_retry_count": 4, 00:40:31.711 "transport_tos": 0 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_nvme_attach_controller", 00:40:31.711 "params": { 00:40:31.711 "adrfam": "IPv4", 00:40:31.711 "ctrlr_loss_timeout_sec": 0, 00:40:31.711 "ddgst": false, 00:40:31.711 "fast_io_fail_timeout_sec": 0, 00:40:31.711 "hdgst": false, 00:40:31.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.711 "name": "nvme0", 00:40:31.711 "prchk_guard": false, 00:40:31.711 "prchk_reftag": false, 00:40:31.711 "psk": "key0", 00:40:31.711 "reconnect_delay_sec": 0, 00:40:31.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.711 "traddr": "127.0.0.1", 00:40:31.711 "trsvcid": "4420", 00:40:31.711 "trtype": "TCP" 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_nvme_set_hotplug", 00:40:31.711 "params": { 00:40:31.711 "enable": false, 00:40:31.711 "period_us": 100000 00:40:31.711 } 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "method": "bdev_wait_for_examine" 00:40:31.711 } 00:40:31.711 ] 00:40:31.711 }, 00:40:31.711 { 00:40:31.711 "subsystem": "nbd", 00:40:31.711 "config": [] 00:40:31.711 } 
00:40:31.711 ] 00:40:31.711 }' 00:40:31.711 13:24:44 keyring_file -- keyring/file.sh@114 -- # killprocess 131002 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 131002 ']' 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@952 -- # kill -0 131002 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131002 00:40:31.711 killing process with pid 131002 00:40:31.711 Received shutdown signal, test time was about 1.000000 seconds 00:40:31.711 00:40:31.711 Latency(us) 00:40:31.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.711 =================================================================================================================== 00:40:31.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131002' 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@967 -- # kill 131002 00:40:31.711 13:24:44 keyring_file -- common/autotest_common.sh@972 -- # wait 131002 00:40:31.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:31.970 13:24:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=131469 00:40:31.970 13:24:44 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:31.970 13:24:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 131469 /var/tmp/bperf.sock 00:40:31.970 13:24:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 131469 ']' 00:40:31.970 13:24:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:31.970 13:24:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:31.970 13:24:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:40:31.970 13:24:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:31.970 13:24:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:40:31.970 "subsystems": [ 00:40:31.970 { 00:40:31.970 "subsystem": "keyring", 00:40:31.970 "config": [ 00:40:31.970 { 00:40:31.970 "method": "keyring_file_add_key", 00:40:31.970 "params": { 00:40:31.970 "name": "key0", 00:40:31.970 "path": "/tmp/tmp.yOtmCPDrOe" 00:40:31.970 } 00:40:31.970 }, 00:40:31.970 { 00:40:31.970 "method": "keyring_file_add_key", 00:40:31.970 "params": { 00:40:31.970 "name": "key1", 00:40:31.970 "path": "/tmp/tmp.4JyVFMGr83" 00:40:31.970 } 00:40:31.970 } 00:40:31.970 ] 00:40:31.970 }, 00:40:31.970 { 00:40:31.970 "subsystem": "iobuf", 00:40:31.970 "config": [ 00:40:31.970 { 00:40:31.970 "method": "iobuf_set_options", 00:40:31.970 "params": { 00:40:31.970 "large_bufsize": 135168, 00:40:31.970 "large_pool_count": 1024, 00:40:31.970 "small_bufsize": 8192, 00:40:31.970 "small_pool_count": 8192 00:40:31.970 } 00:40:31.970 } 00:40:31.970 ] 00:40:31.970 }, 00:40:31.970 { 00:40:31.970 "subsystem": "sock", 00:40:31.970 "config": [ 00:40:31.970 { 00:40:31.970 "method": "sock_set_default_impl", 00:40:31.970 "params": { 00:40:31.970 "impl_name": "posix" 00:40:31.970 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "sock_impl_set_options", 00:40:31.971 "params": { 00:40:31.971 "enable_ktls": false, 00:40:31.971 "enable_placement_id": 0, 00:40:31.971 "enable_quickack": false, 00:40:31.971 "enable_recv_pipe": true, 00:40:31.971 "enable_zerocopy_send_client": false, 00:40:31.971 "enable_zerocopy_send_server": true, 00:40:31.971 "impl_name": "ssl", 00:40:31.971 "recv_buf_size": 4096, 00:40:31.971 "send_buf_size": 4096, 00:40:31.971 "tls_version": 0, 00:40:31.971 "zerocopy_threshold": 0 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "sock_impl_set_options", 00:40:31.971 "params": { 00:40:31.971 "enable_ktls": false, 00:40:31.971 "enable_placement_id": 0, 00:40:31.971 "enable_quickack": false, 00:40:31.971 "enable_recv_pipe": true, 00:40:31.971 "enable_zerocopy_send_client": false, 00:40:31.971 "enable_zerocopy_send_server": true, 00:40:31.971 "impl_name": "posix", 00:40:31.971 "recv_buf_size": 2097152, 00:40:31.971 "send_buf_size": 2097152, 00:40:31.971 "tls_version": 0, 00:40:31.971 "zerocopy_threshold": 0 00:40:31.971 } 00:40:31.971 } 00:40:31.971 ] 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "subsystem": "vmd", 00:40:31.971 "config": [] 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "subsystem": "accel", 00:40:31.971 "config": [ 00:40:31.971 { 00:40:31.971 "method": "accel_set_options", 00:40:31.971 "params": { 00:40:31.971 "buf_count": 2048, 00:40:31.971 "large_cache_size": 16, 00:40:31.971 "sequence_count": 2048, 00:40:31.971 "small_cache_size": 128, 00:40:31.971 "task_count": 2048 00:40:31.971 } 00:40:31.971 } 00:40:31.971 ] 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "subsystem": "bdev", 00:40:31.971 "config": [ 00:40:31.971 { 00:40:31.971 "method": "bdev_set_options", 00:40:31.971 "params": { 00:40:31.971 "bdev_auto_examine": true, 00:40:31.971 "bdev_io_cache_size": 256, 00:40:31.971 "bdev_io_pool_size": 65535, 00:40:31.971 "iobuf_large_cache_size": 16, 00:40:31.971 "iobuf_small_cache_size": 128 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_raid_set_options", 00:40:31.971 "params": { 00:40:31.971 "process_window_size_kb": 1024 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_iscsi_set_options", 00:40:31.971 "params": { 00:40:31.971 
"timeout_sec": 30 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_nvme_set_options", 00:40:31.971 "params": { 00:40:31.971 "action_on_timeout": "none", 00:40:31.971 "allow_accel_sequence": false, 00:40:31.971 "arbitration_burst": 0, 00:40:31.971 "bdev_retry_count": 3, 00:40:31.971 "ctrlr_loss_timeout_sec": 0, 00:40:31.971 "delay_cmd_submit": true, 00:40:31.971 "dhchap_dhgroups": [ 00:40:31.971 "null", 00:40:31.971 "ffdhe2048", 00:40:31.971 "ffdhe3072", 00:40:31.971 "ffdhe4096", 00:40:31.971 "ffdhe6144", 00:40:31.971 "ffdhe8192" 00:40:31.971 ], 00:40:31.971 "dhchap_digests": [ 00:40:31.971 "sha256", 00:40:31.971 "sha384", 00:40:31.971 "sha512" 00:40:31.971 ], 00:40:31.971 "disable_auto_failback": false, 00:40:31.971 "fast_io_fail_timeout_sec": 0, 00:40:31.971 "generate_uuids": false, 00:40:31.971 "high_priority_weight": 0, 00:40:31.971 "io_path_stat": false, 00:40:31.971 "io_queue_requests": 512, 00:40:31.971 "keep_alive_timeout_ms": 10000, 00:40:31.971 "low_priority_weight": 0, 00:40:31.971 "medium_priority_weight": 0, 00:40:31.971 "nvme_adminq_poll_period_us": 10000, 00:40:31.971 "nvme_error_stat": false, 00:40:31.971 "nvme_ioq_poll_period_us": 0, 00:40:31.971 "rdma_cm_event_timeout_ms": 0, 00:40:31.971 "rdma_max_cq_size": 0, 00:40:31.971 "rdma_srq_size": 0, 00:40:31.971 "reconnect_delay_sec": 0, 00:40:31.971 "timeout_admin_us": 0, 00:40:31.971 "timeout_us": 0, 00:40:31.971 "transport_ack_timeout": 0, 00:40:31.971 "transport_retry_count": 4, 00:40:31.971 "transport_tos": 0 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_nvme_attach_controller", 00:40:31.971 "params": { 00:40:31.971 "adrfam": "IPv4", 00:40:31.971 "ctrlr_loss_timeout_sec": 0, 00:40:31.971 "ddgst": false, 00:40:31.971 "fast_io_fail_timeout_sec": 0, 00:40:31.971 "hdgst": false, 00:40:31.971 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:31.971 "name": "nvme0", 00:40:31.971 "prchk_guard": false, 00:40:31.971 "prchk_reftag": false, 00:40:31.971 "psk": "key0", 00:40:31.971 "reconnect_delay_sec": 0, 00:40:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.971 "traddr": "127.0.0.1", 00:40:31.971 "trsvcid": "4420", 00:40:31.971 "trtype": "TCP" 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_nvme_set_hotplug", 00:40:31.971 "params": { 00:40:31.971 "enable": false, 00:40:31.971 "period_us": 100000 00:40:31.971 } 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "method": "bdev_wait_for_examine" 00:40:31.971 } 00:40:31.971 ] 00:40:31.971 }, 00:40:31.971 { 00:40:31.971 "subsystem": "nbd", 00:40:31.971 "config": [] 00:40:31.971 } 00:40:31.971 ] 00:40:31.971 }' 00:40:31.971 13:24:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:31.971 [2024-07-15 13:24:44.270650] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
00:40:31.971 [2024-07-15 13:24:44.270899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131469 ] 00:40:31.971 [2024-07-15 13:24:44.403221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.229 [2024-07-15 13:24:44.463110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.229 [2024-07-15 13:24:44.605758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:33.164 13:24:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:33.164 13:24:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:33.164 13:24:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.164 13:24:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:40:33.164 13:24:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:33.164 13:24:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.164 13:24:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.439 13:24:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:33.439 13:24:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:33.439 13:24:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.439 13:24:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:33.439 13:24:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.439 13:24:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.439 13:24:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.703 13:24:46 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:33.961 13:24:46 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:33.961 13:24:46 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:33.961 13:24:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:34.219 13:24:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:34.219 13:24:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:34.219 13:24:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yOtmCPDrOe /tmp/tmp.4JyVFMGr83 00:40:34.219 13:24:46 keyring_file -- keyring/file.sh@20 -- # killprocess 131469 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 131469 ']' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 131469 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
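The assertions traced through this stretch (two keys registered, refcnt 2 for key0 and 1 for key1, a single controller named nvme0) are all plain RPC calls piped through jq; pulled out of the get_refcnt/get_key helpers, they look roughly like the following, with the rpc.py path and socket exactly as in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# How many keys does this bdevperf instance know about? (expected: 2)
$rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length
# Reference count of an individual key (the test expects 2 for key0, 1 for key1).
$rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt
# Name of the attached controller (expected: nvme0).
$rpc -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'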
00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131469 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:34.219 killing process with pid 131469 00:40:34.219 Received shutdown signal, test time was about 1.000000 seconds 00:40:34.219 00:40:34.219 Latency(us) 00:40:34.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.219 =================================================================================================================== 00:40:34.219 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131469' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@967 -- # kill 131469 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@972 -- # wait 131469 00:40:34.219 13:24:46 keyring_file -- keyring/file.sh@21 -- # killprocess 130982 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 130982 ']' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 130982 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130982 00:40:34.219 killing process with pid 130982 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130982' 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@967 -- # kill 130982 00:40:34.219 [2024-07-15 13:24:46.678746] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:34.219 13:24:46 keyring_file -- common/autotest_common.sh@972 -- # wait 130982 00:40:34.477 ************************************ 00:40:34.477 END TEST keyring_file 00:40:34.477 ************************************ 00:40:34.477 00:40:34.477 real 0m15.653s 00:40:34.477 user 0m40.359s 00:40:34.477 sys 0m3.050s 00:40:34.477 13:24:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:34.477 13:24:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:34.734 13:24:46 -- common/autotest_common.sh@1142 -- # return 0 00:40:34.734 13:24:46 -- spdk/autotest.sh@299 -- # [[ y == y ]] 00:40:34.734 13:24:46 -- spdk/autotest.sh@300 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:40:34.734 13:24:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:34.734 13:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:34.734 13:24:46 -- common/autotest_common.sh@10 -- # set +x 00:40:34.734 ************************************ 00:40:34.734 START TEST keyring_linux 00:40:34.734 ************************************ 00:40:34.734 13:24:46 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:40:34.734 * Looking for test storage... 
00:40:34.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:40:34.734 13:24:47 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:40:34.734 13:24:47 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=2cb7827d-6d51-45f2-a3fc-4dedac25353a 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.734 13:24:47 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:34.734 13:24:47 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.734 13:24:47 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.734 13:24:47 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.735 13:24:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.735 13:24:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.735 13:24:47 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.735 13:24:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:34.735 13:24:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:34.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@706 -- # local prefix key digest 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:40:34.735 
13:24:47 keyring_linux -- nvmf/common.sh@708 -- # digest=0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@709 -- # python - 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:34.735 /tmp/:spdk-test:key0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@706 -- # local prefix key digest 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@708 -- # key=112233445566778899aabbccddeeff00 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@708 -- # digest=0 00:40:34.735 13:24:47 keyring_linux -- nvmf/common.sh@709 -- # python - 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:34.735 /tmp/:spdk-test:key1 00:40:34.735 13:24:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=131615 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:34.735 13:24:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 131615 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 131615 ']' 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:34.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:34.735 13:24:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:34.993 [2024-07-15 13:24:47.236726] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
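The prep_key calls above build the NVMe TLS interchange form of each hex key (prefix NVMeTLSkey-1, digest 0, payload base64-encoded with a trailing checksum) and, judging by the chmod/echo trace, leave it in /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A rough reconstruction of that end state is sketched below; the formatted strings are copied verbatim from the keyctl add calls later in this log, and the assumption that the files hold the formatted (rather than raw hex) keys is an inference, not something the trace states explicitly.

# Approximate end state of prep_key for the two keyring_linux test keys (inferred).
printf '%s' 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/:spdk-test:key0
printf '%s' 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' > /tmp/:spdk-test:key1
chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1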
00:40:34.993 [2024-07-15 13:24:47.236835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131615 ] 00:40:34.993 [2024-07-15 13:24:47.373086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.993 [2024-07-15 13:24:47.432231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:35.251 [2024-07-15 13:24:47.595588] tcp.c: 709:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.251 null0 00:40:35.251 [2024-07-15 13:24:47.627481] tcp.c: 988:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:35.251 [2024-07-15 13:24:47.627705] tcp.c:1038:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:35.251 384199096 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:35.251 209778322 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=131635 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:35.251 13:24:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 131635 /var/tmp/bperf.sock 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 131635 ']' 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:35.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:35.251 13:24:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:35.509 [2024-07-15 13:24:47.720423] Starting SPDK v24.09-pre git sha1 a62e924c8 / DPDK 24.03.0 initialization... 
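Besides the files, linux.sh loads both formatted PSKs into the kernel session keyring; the ":spdk-test:key0" / ":spdk-test:key1" names passed to --psk further down appear to resolve against these entries once keyring_linux_set_options --enable has been issued. The keyctl calls above and the later lookups, shown standalone (the serial numbers 384199096 and 209778322 printed in the trace are specific to this run):

# Register the formatted PSKs with the session keyring; keyctl prints each new serial.
keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s
# Resolve a name back to its serial, inspect it, and unlink it -- the same
# search/print/unlink steps traced near the end of the keyring_linux test.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
keyctl unlink "$sn"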
00:40:35.509 [2024-07-15 13:24:47.720548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131635 ] 00:40:35.509 [2024-07-15 13:24:47.865227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.509 [2024-07-15 13:24:47.924427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.442 13:24:48 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:36.442 13:24:48 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:36.442 13:24:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:36.442 13:24:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:36.700 13:24:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:36.700 13:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:36.958 13:24:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:36.958 13:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:37.215 [2024-07-15 13:24:49.608191] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:37.474 nvme0n1 00:40:37.474 13:24:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:37.474 13:24:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:37.474 13:24:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:37.474 13:24:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:37.474 13:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.474 13:24:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:37.732 13:24:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:37.732 13:24:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:37.732 13:24:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:37.732 13:24:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:37.732 13:24:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.732 13:24:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.732 13:24:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@25 -- # sn=384199096 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@26 -- # [[ 384199096 == \3\8\4\1\9\9\0\9\6 ]] 00:40:37.989 13:24:50 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 384199096 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:37.989 13:24:50 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:37.989 Running I/O for 1 seconds... 00:40:38.955 00:40:38.955 Latency(us) 00:40:38.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:38.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:38.955 nvme0n1 : 1.01 10973.91 42.87 0.00 0.00 11596.98 6523.81 17635.14 00:40:38.955 =================================================================================================================== 00:40:38.955 Total : 10973.91 42.87 0.00 0.00 11596.98 6523.81 17635.14 00:40:38.955 0 00:40:38.955 13:24:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:38.955 13:24:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:39.213 13:24:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:39.213 13:24:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:39.213 13:24:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:39.213 13:24:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:39.213 13:24:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:39.213 13:24:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:39.840 13:24:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:39.840 13:24:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:39.840 13:24:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:39.840 13:24:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:39.840 13:24:51 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:39.840 13:24:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:40:39.840 [2024-07-15 13:24:52.243614] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:39.840 [2024-07-15 13:24:52.244028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dcea0 (107): Transport endpoint is not connected 00:40:39.840 [2024-07-15 13:24:52.245016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dcea0 (9): Bad file descriptor 00:40:39.840 [2024-07-15 13:24:52.246008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:39.840 [2024-07-15 13:24:52.246036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:39.840 [2024-07-15 13:24:52.246047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:39.840 2024/07/15 13:24:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:40:39.840 request: 00:40:39.840 { 00:40:39.840 "method": "bdev_nvme_attach_controller", 00:40:39.840 "params": { 00:40:39.840 "name": "nvme0", 00:40:39.840 "trtype": "tcp", 00:40:39.840 "traddr": "127.0.0.1", 00:40:39.840 "adrfam": "ipv4", 00:40:39.840 "trsvcid": "4420", 00:40:39.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:39.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:39.840 "prchk_reftag": false, 00:40:39.840 "prchk_guard": false, 00:40:39.840 "hdgst": false, 00:40:39.840 "ddgst": false, 00:40:39.840 "psk": ":spdk-test:key1" 00:40:39.840 } 00:40:39.840 } 00:40:39.840 Got JSON-RPC error response 00:40:39.840 GoRPCClient: error on JSON-RPC call 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@33 -- # sn=384199096 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 384199096 00:40:40.120 1 links removed 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@33 -- # sn=209778322 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 209778322 00:40:40.120 1 links removed 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 131635 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 131635 ']' 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 131635 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131635 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:40.120 killing process with pid 131635 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131635' 00:40:40.120 Received shutdown signal, test time was about 1.000000 seconds 00:40:40.120 00:40:40.120 Latency(us) 00:40:40.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.120 =================================================================================================================== 00:40:40.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 131635 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 131635 00:40:40.120 13:24:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 131615 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 131615 ']' 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 131615 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131615 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:40.120 killing process with pid 131615 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131615' 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@967 -- # kill 131615 00:40:40.120 13:24:52 keyring_linux -- common/autotest_common.sh@972 -- # wait 131615 00:40:40.445 00:40:40.445 real 0m5.766s 00:40:40.445 user 0m12.044s 00:40:40.445 sys 0m1.424s 00:40:40.445 13:24:52 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:40.445 13:24:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:40.445 ************************************ 00:40:40.445 END TEST keyring_linux 00:40:40.445 ************************************ 00:40:40.445 13:24:52 -- common/autotest_common.sh@1142 -- # return 0 00:40:40.445 13:24:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 
']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:40.445 13:24:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:40.445 13:24:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:40.445 13:24:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:40.445 13:24:52 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:40:40.445 13:24:52 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:40:40.445 13:24:52 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:40:40.445 13:24:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:40.445 13:24:52 -- common/autotest_common.sh@10 -- # set +x 00:40:40.445 13:24:52 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:40:40.445 13:24:52 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:40.445 13:24:52 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:40.445 13:24:52 -- common/autotest_common.sh@10 -- # set +x 00:40:41.870 INFO: APP EXITING 00:40:41.870 INFO: killing all VMs 00:40:41.870 INFO: killing vhost app 00:40:41.870 INFO: EXIT DONE 00:40:42.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:42.435 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:42.435 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:43.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:43.285 Cleaning 00:40:43.285 Removing: /var/run/dpdk/spdk0/config 00:40:43.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:43.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:43.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:43.285 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:43.285 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:43.285 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:43.285 Removing: /var/run/dpdk/spdk1/config 00:40:43.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:43.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:43.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:43.285 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:43.285 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:43.285 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:43.285 Removing: /var/run/dpdk/spdk2/config 00:40:43.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:43.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:43.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:43.285 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:43.285 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:43.285 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:43.285 Removing: /var/run/dpdk/spdk3/config 00:40:43.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:43.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:43.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:43.285 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:43.285 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:43.285 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:40:43.285 Removing: /var/run/dpdk/spdk4/config 00:40:43.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:43.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:43.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:43.285 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:43.285 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:43.285 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:43.285 Removing: /dev/shm/nvmf_trace.0 00:40:43.285 Removing: /dev/shm/spdk_tgt_trace.pid60760 00:40:43.285 Removing: /var/run/dpdk/spdk0 00:40:43.285 Removing: /var/run/dpdk/spdk1 00:40:43.285 Removing: /var/run/dpdk/spdk2 00:40:43.285 Removing: /var/run/dpdk/spdk3 00:40:43.285 Removing: /var/run/dpdk/spdk4 00:40:43.285 Removing: /var/run/dpdk/spdk_pid100054 00:40:43.285 Removing: /var/run/dpdk/spdk_pid100371 00:40:43.285 Removing: /var/run/dpdk/spdk_pid102693 00:40:43.285 Removing: /var/run/dpdk/spdk_pid103044 00:40:43.285 Removing: /var/run/dpdk/spdk_pid103296 00:40:43.285 Removing: /var/run/dpdk/spdk_pid103338 00:40:43.285 Removing: /var/run/dpdk/spdk_pid103938 00:40:43.285 Removing: /var/run/dpdk/spdk_pid104362 00:40:43.285 Removing: /var/run/dpdk/spdk_pid104784 00:40:43.285 Removing: /var/run/dpdk/spdk_pid104826 00:40:43.285 Removing: /var/run/dpdk/spdk_pid105180 00:40:43.285 Removing: /var/run/dpdk/spdk_pid105683 00:40:43.285 Removing: /var/run/dpdk/spdk_pid106132 00:40:43.285 Removing: /var/run/dpdk/spdk_pid107074 00:40:43.285 Removing: /var/run/dpdk/spdk_pid107987 00:40:43.285 Removing: /var/run/dpdk/spdk_pid108098 00:40:43.285 Removing: /var/run/dpdk/spdk_pid108161 00:40:43.285 Removing: /var/run/dpdk/spdk_pid109604 00:40:43.285 Removing: /var/run/dpdk/spdk_pid109827 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115166 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115597 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115695 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115842 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115888 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115928 00:40:43.285 Removing: /var/run/dpdk/spdk_pid115975 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116129 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116272 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116505 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116622 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116861 00:40:43.285 Removing: /var/run/dpdk/spdk_pid116968 00:40:43.285 Removing: /var/run/dpdk/spdk_pid117102 00:40:43.285 Removing: /var/run/dpdk/spdk_pid117436 00:40:43.285 Removing: /var/run/dpdk/spdk_pid117847 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118134 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118606 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118614 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118950 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118964 00:40:43.285 Removing: /var/run/dpdk/spdk_pid118984 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119010 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119021 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119369 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119417 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119736 00:40:43.285 Removing: /var/run/dpdk/spdk_pid119966 00:40:43.285 Removing: /var/run/dpdk/spdk_pid120432 00:40:43.285 Removing: /var/run/dpdk/spdk_pid121003 00:40:43.285 Removing: /var/run/dpdk/spdk_pid122330 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124394 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124485 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124562 00:40:43.542 Removing: 
/var/run/dpdk/spdk_pid124646 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124771 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124845 00:40:43.542 Removing: /var/run/dpdk/spdk_pid124922 00:40:43.542 Removing: /var/run/dpdk/spdk_pid125007 00:40:43.542 Removing: /var/run/dpdk/spdk_pid125341 00:40:43.542 Removing: /var/run/dpdk/spdk_pid125990 00:40:43.542 Removing: /var/run/dpdk/spdk_pid127291 00:40:43.542 Removing: /var/run/dpdk/spdk_pid127473 00:40:43.542 Removing: /var/run/dpdk/spdk_pid127731 00:40:43.542 Removing: /var/run/dpdk/spdk_pid128007 00:40:43.542 Removing: /var/run/dpdk/spdk_pid128553 00:40:43.542 Removing: /var/run/dpdk/spdk_pid128558 00:40:43.542 Removing: /var/run/dpdk/spdk_pid128902 00:40:43.542 Removing: /var/run/dpdk/spdk_pid129051 00:40:43.542 Removing: /var/run/dpdk/spdk_pid129193 00:40:43.542 Removing: /var/run/dpdk/spdk_pid129281 00:40:43.542 Removing: /var/run/dpdk/spdk_pid129425 00:40:43.542 Removing: /var/run/dpdk/spdk_pid129529 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130186 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130216 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130252 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130500 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130530 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130560 00:40:43.542 Removing: /var/run/dpdk/spdk_pid130982 00:40:43.542 Removing: /var/run/dpdk/spdk_pid131002 00:40:43.542 Removing: /var/run/dpdk/spdk_pid131469 00:40:43.542 Removing: /var/run/dpdk/spdk_pid131615 00:40:43.542 Removing: /var/run/dpdk/spdk_pid131635 00:40:43.542 Removing: /var/run/dpdk/spdk_pid60619 00:40:43.542 Removing: /var/run/dpdk/spdk_pid60760 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61021 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61108 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61153 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61257 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61287 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61405 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61685 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61856 00:40:43.542 Removing: /var/run/dpdk/spdk_pid61932 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62011 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62100 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62133 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62169 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62230 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62331 00:40:43.542 Removing: /var/run/dpdk/spdk_pid62965 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63023 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63092 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63126 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63205 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63233 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63307 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63335 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63387 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63417 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63464 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63504 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63637 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63667 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63742 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63794 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63824 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63877 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63918 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63947 00:40:43.542 Removing: /var/run/dpdk/spdk_pid63982 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64016 00:40:43.542 Removing: 
/var/run/dpdk/spdk_pid64051 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64087 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64116 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64151 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64185 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64221 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64250 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64284 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64319 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64348 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64388 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64417 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64460 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64492 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64521 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64562 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64621 00:40:43.542 Removing: /var/run/dpdk/spdk_pid64733 00:40:43.542 Removing: /var/run/dpdk/spdk_pid65154 00:40:43.542 Removing: /var/run/dpdk/spdk_pid68488 00:40:43.542 Removing: /var/run/dpdk/spdk_pid68822 00:40:43.542 Removing: /var/run/dpdk/spdk_pid71111 00:40:43.542 Removing: /var/run/dpdk/spdk_pid71471 00:40:43.542 Removing: /var/run/dpdk/spdk_pid71688 00:40:43.542 Removing: /var/run/dpdk/spdk_pid71739 00:40:43.542 Removing: /var/run/dpdk/spdk_pid72362 00:40:43.799 Removing: /var/run/dpdk/spdk_pid72793 00:40:43.799 Removing: /var/run/dpdk/spdk_pid72843 00:40:43.799 Removing: /var/run/dpdk/spdk_pid73198 00:40:43.799 Removing: /var/run/dpdk/spdk_pid73718 00:40:43.799 Removing: /var/run/dpdk/spdk_pid74169 00:40:43.799 Removing: /var/run/dpdk/spdk_pid75144 00:40:43.799 Removing: /var/run/dpdk/spdk_pid76104 00:40:43.799 Removing: /var/run/dpdk/spdk_pid76221 00:40:43.799 Removing: /var/run/dpdk/spdk_pid76283 00:40:43.799 Removing: /var/run/dpdk/spdk_pid77711 00:40:43.799 Removing: /var/run/dpdk/spdk_pid77915 00:40:43.799 Removing: /var/run/dpdk/spdk_pid83354 00:40:43.799 Removing: /var/run/dpdk/spdk_pid83795 00:40:43.799 Removing: /var/run/dpdk/spdk_pid83899 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84045 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84077 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84123 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84163 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84313 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84465 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84686 00:40:43.799 Removing: /var/run/dpdk/spdk_pid84795 00:40:43.799 Removing: /var/run/dpdk/spdk_pid85049 00:40:43.799 Removing: /var/run/dpdk/spdk_pid85173 00:40:43.799 Removing: /var/run/dpdk/spdk_pid85309 00:40:43.799 Removing: /var/run/dpdk/spdk_pid85643 00:40:43.799 Removing: /var/run/dpdk/spdk_pid86053 00:40:43.799 Removing: /var/run/dpdk/spdk_pid86323 00:40:43.799 Removing: /var/run/dpdk/spdk_pid86810 00:40:43.799 Removing: /var/run/dpdk/spdk_pid86813 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87163 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87177 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87191 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87224 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87233 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87584 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87632 00:40:43.799 Removing: /var/run/dpdk/spdk_pid87969 00:40:43.799 Removing: /var/run/dpdk/spdk_pid88205 00:40:43.799 Removing: /var/run/dpdk/spdk_pid88681 00:40:43.799 Removing: /var/run/dpdk/spdk_pid89259 00:40:43.799 Removing: /var/run/dpdk/spdk_pid90614 00:40:43.799 Removing: /var/run/dpdk/spdk_pid91205 00:40:43.799 Removing: /var/run/dpdk/spdk_pid91207 
00:40:43.799 Removing: /var/run/dpdk/spdk_pid93125 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93210 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93306 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93401 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93560 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93650 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93746 00:40:43.799 Removing: /var/run/dpdk/spdk_pid93832 00:40:43.799 Removing: /var/run/dpdk/spdk_pid94177 00:40:43.799 Removing: /var/run/dpdk/spdk_pid94838 00:40:43.799 Removing: /var/run/dpdk/spdk_pid96204 00:40:43.799 Removing: /var/run/dpdk/spdk_pid96411 00:40:43.799 Removing: /var/run/dpdk/spdk_pid96684 00:40:43.799 Clean 00:40:43.799 13:24:56 -- common/autotest_common.sh@1451 -- # return 0 00:40:43.799 13:24:56 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:40:43.799 13:24:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:43.799 13:24:56 -- common/autotest_common.sh@10 -- # set +x 00:40:43.799 13:24:56 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:40:43.799 13:24:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:43.799 13:24:56 -- common/autotest_common.sh@10 -- # set +x 00:40:44.056 13:24:56 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:44.056 13:24:56 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:40:44.056 13:24:56 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:40:44.056 13:24:56 -- spdk/autotest.sh@394 -- # hash lcov 00:40:44.056 13:24:56 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:40:44.056 13:24:56 -- spdk/autotest.sh@396 -- # hostname 00:40:44.056 13:24:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:40:44.056 geninfo: WARNING: invalid characters removed from testname! 
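Everything from here on is coverage post-processing: the lcov capture above writes cov_test.info, which the next few commands merge with the pre-test baseline and then prune. The same chain in condensed form (identical filter patterns and output file; the --rc options and the full /home/vagrant/spdk_repo paths are omitted for brevity):

# Merge baseline and per-test coverage, then drop trees we do not want in the report.
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info
done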
00:41:16.117 13:25:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:16.117 13:25:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:18.017 13:25:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:20.546 13:25:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:23.829 13:25:35 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:26.356 13:25:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:28.900 13:25:41 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:41:29.157 13:25:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:41:29.157 13:25:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:41:29.157 13:25:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:29.157 13:25:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:29.157 13:25:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:29.157 13:25:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:29.157 13:25:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:29.157 13:25:41 -- paths/export.sh@5 -- $ export PATH
00:41:29.157 13:25:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:29.157 13:25:41 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:41:29.157 13:25:41 -- common/autobuild_common.sh@444 -- $ date +%s
00:41:29.157 13:25:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721049941.XXXXXX
00:41:29.157 13:25:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721049941.Jjw8az
00:41:29.157 13:25:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:41:29.157 13:25:41 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:41:29.157 13:25:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:41:29.157 13:25:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:41:29.157 13:25:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:41:29.157 13:25:41 -- common/autobuild_common.sh@460 -- $ get_config_params
00:41:29.157 13:25:41 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:41:29.157 13:25:41 -- common/autotest_common.sh@10 -- $ set +x
00:41:29.157 13:25:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:41:29.157 13:25:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:41:29.157 13:25:41 -- pm/common@17 -- $ local monitor
00:41:29.157 13:25:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:29.157 13:25:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:29.157 13:25:41 -- pm/common@25 -- $ sleep 1
00:41:29.157 13:25:41 -- pm/common@21 -- $ date +%s
00:41:29.157 13:25:41 -- pm/common@21 -- $ date +%s
00:41:29.157 13:25:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721049941
00:41:29.157 13:25:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721049941
00:41:29.157 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721049941_collect-vmstat.pm.log
00:41:29.157 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721049941_collect-cpu-load.pm.log
00:41:30.090 13:25:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:41:30.090 13:25:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:41:30.090 13:25:42 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:41:30.090 13:25:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:41:30.090 13:25:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:41:30.090 13:25:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:41:30.090 13:25:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:30.090 13:25:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:41:30.090 13:25:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:41:30.090 13:25:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:41:30.090 13:25:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:30.090 13:25:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:30.090 13:25:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:30.090 13:25:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:30.090 13:25:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:41:30.090 13:25:42 -- pm/common@44 -- $ pid=133389
00:41:30.090 13:25:42 -- pm/common@50 -- $ kill -TERM 133389
00:41:30.090 13:25:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:30.090 13:25:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:41:30.090 13:25:42 -- pm/common@44 -- $ pid=133390
00:41:30.090 13:25:42 -- pm/common@50 -- $ kill -TERM 133390
00:41:30.090 + [[ -n 5159 ]]
00:41:30.090 + sudo kill 5159
00:41:31.028 [Pipeline] }
00:41:31.042 [Pipeline] // timeout
00:41:31.047 [Pipeline] }
00:41:31.059 [Pipeline] // stage
00:41:31.064 [Pipeline] }
00:41:31.080 [Pipeline] // catchError
00:41:31.089 [Pipeline] stage
00:41:31.091 [Pipeline] { (Stop VM)
00:41:31.104 [Pipeline] sh
00:41:31.378 + vagrant halt
00:41:35.567 ==> default: Halting domain...
00:41:40.840 [Pipeline] sh
00:41:41.117 + vagrant destroy -f
00:41:45.301 ==> default: Removing domain...
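Note: the pm/common trace earlier in this block (@40 through @50) is the stop half of a pid-file convention: each resource monitor launched for autopackage (collect-cpu-load, collect-vmstat) drops a pid file under the power/ output directory, and stop_monitor_resources sends TERM to whatever pid files are still present. A minimal sketch of that stop logic, with the loop body simplified and the variable names illustrative rather than taken from the scripts:

    # send TERM to each monitor whose pid file still exists; directory as used in this run
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue   # monitor never started, or already cleaned up
        pid=$(<"$pidfile")              # e.g. 133389 / 133390 in the trace above
        kill -TERM "$pid"
    done

The EXIT trap installed at autobuild_common.sh@463 is what routes the script's normal exit through this cleanup before the VM is halted and destroyed.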
00:41:45.317 [Pipeline] sh
00:41:45.598 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_4/output
00:41:45.608 [Pipeline] }
00:41:45.628 [Pipeline] // stage
00:41:45.635 [Pipeline] }
00:41:45.654 [Pipeline] // dir
00:41:45.660 [Pipeline] }
00:41:45.680 [Pipeline] // wrap
00:41:45.687 [Pipeline] }
00:41:45.706 [Pipeline] // catchError
00:41:45.741 [Pipeline] stage
00:41:45.761 [Pipeline] { (Epilogue)
00:41:45.777 [Pipeline] sh
00:41:46.057 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:52.627 [Pipeline] catchError
00:41:52.629 [Pipeline] {
00:41:52.643 [Pipeline] sh
00:41:52.921 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:53.179 Artifacts sizes are good
00:41:53.187 [Pipeline] }
00:41:53.204 [Pipeline] // catchError
00:41:53.215 [Pipeline] archiveArtifacts
00:41:53.222 Archiving artifacts
00:41:53.373 [Pipeline] cleanWs
00:41:53.383 [WS-CLEANUP] Deleting project workspace...
00:41:53.383 [WS-CLEANUP] Deferred wipeout is used...
00:41:53.388 [WS-CLEANUP] done
00:41:53.390 [Pipeline] }
00:41:53.406 [Pipeline] // stage
00:41:53.410 [Pipeline] }
00:41:53.423 [Pipeline] // node
00:41:53.428 [Pipeline] End of Pipeline
00:41:53.562 Finished: SUCCESS
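Closing note: the Epilogue stage compresses the collected output and runs check_artifacts_size.sh, which here only reports "Artifacts sizes are good". The script's actual logic is not shown in this log; purely as a hypothetical illustration of that kind of size gate (the limit value and directory layout below are assumptions, not taken from the script):

    # hypothetical sketch only -- not the contents of check_artifacts_size.sh
    artifacts_dir=/var/jenkins/workspace/nvmf-tcp-vg-autotest_4/output   # destination of the mv above
    limit_mb=2048                                                        # assumed threshold
    size_mb=$(du -sm "$artifacts_dir" | awk '{print $1}')
    if (( size_mb > limit_mb )); then
        echo "Artifacts too large: ${size_mb} MB (limit ${limit_mb} MB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"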